[Binary content removed: POSIX tar archive of the `var/home/core/zuul-output/` directory, containing `var/home/core/zuul-output/logs/kubelet.log.gz` (a gzip-compressed kubelet log). The compressed payload is not human-readable text and has been omitted; extract the archive and decompress the `.gz` file to view the original kubelet log.]
KB*`1 PB$^ȴhkcE IGem {!X'coHƌFwN*985I*ʃ 5DQ{rIŭPBZ;I$SJګĖWAb|wg 0rN[ՀW- W1RgCŢP™zo]xRA^,\IPR9f !cDd)ꤩLsY>,&i}DR(ғdJkvW'ݵ7 "Źa :Y;4SƠL'D\ D3ɰl*J@嗤`ԎS߻).Q`2U@RrR{oG8g@<$ϩQVN.HqexY3*m-?o ϳRV>j(sy~ I*c\ԶrhڰyH6.OZyӮ#+_.h)TQt$JZ(G9]j&M0£"G[U6:k֔z.hCL$]T zgLD<%B:J"g$5tID Ӽ G ,7l)㾟TMxւ _>b Y軌c΄T ,ƫ?\ukѬT-g0E#I8:38 H QEhktI -%4G`iimcQʢ*H\@D  L(IaMA,Y֗Quxta!EÇғÇj''= kذPkE:W2<1̈́)Vn^탗輦.XpVVG#6zsIpQ2s*9сS}~6:RQ9 ZrH0:z.opeRb%l#~ib3jjnI+9]߅!~ٿ2tl!3߯V$ZRUv fږzgz> }{=)b#P!\i5V";4&s*426*6T[eŧ^=+8Nkg{h솙c0_Z2+jOm΄|goPJmʵMrAꌭH TՕAT$jN1rB#'9*'G$f9ǘ8-fhu 9A\HA`6TPs<p) bTmAL&ʌdXGhk1eg"Y_ݍG_]:hî;siVD}fґKm˅UD?BcXPr#PltD~׬MeNgw#'Mmwn=Olnɬ][%]M'7)7'KyN?N@?Ӓ[KU~ 79X~]K4c-|bkn|$trrYN27Hf'`O07wGǷMY/+dS'cdm;(g lx]ukBz݅ڗ {(C`jDr=*7[=Ucc946=Hx_?G٣y86c=Ȫ4ZJ0>~qK!q(Qob.a^PÇqploFX>ۓ)*C~Ǔ<> ܻ0>7_MDCUkqĚW-=R6m-mt?^K~Ez~O{|w j`/ľ{ּ:j0Ҝ}0Mk=!j:cDQ.DQܷ\)v2)l@.:AsT#P%Z3+a`jS$ulx~8YK^lqzuSfޛ9/\;ZsbqEbdoqȄƇ 7UkOOOC}BpK?ߎ"yrt ߓq<\^p~ǣwRp_f[dqO<-JB TG齹BiUtZG+Մ</Żt4^>Q; XJ%C{̀"Kx{g}nK߬93\k={/R/n"m[ʵjTkBqfdJm:hz@|+[^* (->1iidTq/ߨ<5oRX"F~{<^֑Spges&tTj9*I.%R$-ZxK뺭@s3[=5jgnl>yxuN.XRnH M2:9t"XYP NPK* #@0ZH\}N#'8Eq  &u.085b C#mM8HA>_FjbD%U1C&ijb Ja(@ J@mrpk1Jmnˍ\Fϟs86^JIR%A(% K""TpHJDFD 6ADv̡$)Pqc"7#LKsÝg)(x$jH׎ aGSdf0?볣ZLB1B T% Wyp`,%\*0%! AI(UېVɛ,(c\~FtL=&D.DE aǬT#*v|wꐉVzʼ8%UC.w%Bjx9T>QʃQJl%fTH1)!(9єxK6IhdJ:Z#H5(h]RZ`ʈ)ֹ5,% !.'TСMP ꇒH40[eA !) 
9x&P "REbP筍A51< 0 bY.&Ø(41Z@gJZ` X[i a+XAe@8JVr 2 1 J8 92j\`5p#Y </кzM ~*BuJbcM]Jx5{PP1mIyT0 ((F( m셈  %zTYD++o7 e;n1:3[lj4ȀgZm E][YdbVCq|1z(]6,Vhpj[!s#2Lq ߹a M\mZNP7^k.AE|DB&(ƶ6:d-D&=xHu@KyPꪤ`שX/b$PQ4j7V/9~=)x@iA2QӒDQ`˜0ҹ~c zOy o0|{@Vz!,ЭuQx# nAe6&CƂՙA,TG,G˳"˒mL1J%wlTѤg:5(fT@aBAmM R1&(.L63hI^,Ptt-;#Zo 0c; C DtIȨL()*cۺB:b8s4F)xf}mQ6H2%YZUzK,#aD5IԿAn$LL6 ,K· +Z%da0RN6̃+ Fɿ.EM3|i- z uT S˅[QW8neʭw.V'gu/oP]& [CJ"8B,xT%#.b4CW=U4~ p*a$119Q d?R@*pAP-K%GcȦL@S6h t-7mjѢeK*:^'BÛJJ!ArI-H L)#o,0nKinAHwx=JB4S"qd%q\;((e҃O>D\K&"6A@U[# !.P,1tQ-0 UDy+p]-+z'!8m5 ^cՀYm^0-zxjD vheS~dI72jYVF-45t{k O~E_hI^kz@0׼%+_I @Hΐ@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H v@Jv&% ((`ygʭH @ B@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H v@9RnRUY|JQ|?Dv#D|:?i-o Dpon%wyRz]xo~NN,#y[=nc\?Yoip&LnN'ˁY\\Lx>JYԼ 18ķּ|{&+[>G[U;#?IOR'?IOR'?IOR'?IOR'?IOR'?IORͪCk_ī\\k\Wkŋ?Z)-iߠ$ipfգדG6!VkrrRM {T<_Bި6¿m&Nj/dey}9Gl lyN{ԍfX@= YWZ*("][/^b`PngSn꯿P >E+fsv$Ȍ/u5{d[5\#֓DP?Xz]+ʈۏ=l˶BWݸYL\.Yռ^"~sCIrgg_݂3 Cq(PuغYŹQW?hz̜#x/l(ޫe< [߷櫖Wr~3~&vZz\78[ O ]zLN֋~f^B@]cv4F:@Czy ? ?eɋ&|i vx-/nؙOMvY]vk~y/qq Y9mS9aݗ{_ +v0G[C5`r ͋!z_7S>snx ~ӻ=nݺoS"۬CM?<> : N>|9qo=II|ì)B{+-_Yz\kSK]F= nqa9P"՘^]txǼ,8e}q(j{,$ՔIV7iuMS{̟ئf/<}bVc'wMw5Ok8OMq&/ ꪻ{⎛t]ݱ՘/λ[8|ѺŃ}gXUCB_E_[2>9:\xӇP^/O- 6Wy =(Ɨ"79^< Nd/Rڗˆ(0 ÌAxocB,`yqyzE8O˽I+ v~zǡ"[;om;;5i{؝64 lzEa!^!tXDo_OJ)xk?6E̍j3,.'e-H!4gHR㌒M{],L[lfǜpQz/i:Qu-xװ{"}(iq6wg<mORCuG!s]hj5BOC?G wy= ostv5S?{a|7Ox<88[NTtSF;tRJ \O6'yϻ3损`\E#'Su:~2:pkE#AoXAG#%j&?J'j#MSt6I2RqDs&׶Ehu+el4RKau!zN69wf?tW? mc[ a;ЍcgA>},s}J{zWEF6V܈j#WfAS5e5x}O@/X0D^x'V-xB̜*}k󮵹$XXӂ%BR&Y"ocI=(`Sicy[{A%ؠbKF={9o73+*^}}[Lˍ;K-Pn)|j/St,y&pA+)/jܤ> `CqO4/۔ޮv>_qK"CdUd<;8RFcZc2h)0D84t@%L.UH2EX1Ay psG: A1)sm?,l۬EQOr×'y1]uMFsdZ >~u}>k0^rYӭ=_y} &?vd]b uq c`9{R"y> $y{뿭fKQOv|*y^vM1 K(:=zQ.m?lF给O9hwd5Jٿ_gG/5\Ý-#I+; c8dyq\gqO0E*$e[>~3DRI8?~?~2_}:Fpں ]m]N疦 fF_?z?_}a~O|>q=ꮷ$Nȭ*o"żGa`q]4LIϪhFxQװ[0iJ~4آ]%JrK[mWV!1@ۺOy7!g,k}]bm-éjR,uYw`ǶKgm \xeؿ.֏_vXbARiep !J)) `6&&2-ȌU~ud ZԿ Ӊ*4<M""(Z[ zEH{D 8pH @8~FVIB 6lBibs⑧ ,{pVߣ5 G|˿{֨1W`.8VFp[NVAI0Y"J #^R! 
fXV頹6TPiZ8'G&))|X})uo={e\AA"hiGk ƒ.H=jjIC?gg ܤʹ~>g0r=iݜo >:y Y>oikxɂPulx O-C/4S(v:X})/ER-FI&"ME `P;9:?(iYc pt mQxb1J&ijb  :g(`X"!(O'c0,֝-p:Z6V^;z Ά_l2j-lKpoc\n' xff7T=W=uθҍ<+\۵3?m'O㭧!US v̭yy.Vnx|[慒x4ZSfټbΏi}ͣ*6^4fT{&_x5{7Ss3*nKϿug!苞[7g*V7j oϗ2p҅U+[UY+d%xJRC4ąI߇^Ovi98Ky'kO6) :$( W,4&(:RjiTGDed TkՎFx "XXy+`$Sa'XwI%[qq~97G[|\xjȾA %`YStxXxpCWg`~lYߢCs,ƾWϷ\_{;pp(xg;uV{/#O\nܭ ;{{h^.XqrQL=˭&,g^ 52{}_}7qx@ 꿏]{I2pl/DjfX]{hj0v}R:ҸTίpC.j<4{lŲߣC=zXvBj|s?Qxz Fݞfi^bF^ ǠK *9?@8݉o?#l6M!񺅤BHtȀrFX&9ଈL3b'AK !EdO (D76H@I 1jM8h7-&f7y~xr4e/GkJBtF{ؠm6IC6 AAҺ;~V.$t$-eH"@wX8^ Fqha^I2r ՃGbS'~sQ'D\ D3ɰl*J)Crun/]ƔC(0S* LGE )97#3 rшu(+u'c'$ct|GM{݋#^ ?JhW_ q6-#=)ReVN[j%ŗkFEУBЉ'Иwq>S'Ǣ,K$.}b"FˁJq&Τ& Q`s,PK3fp:q<:1ǂ]U~1- 7y Q;/i&7Q l/3yM]:ɱVG#6zsIpQ2}&-=?n!:Cǖn",ocnB@3U6 ٰ-b-,|LyiC-]j8v2Z.EJO6E+0EA&⮅s8 7FF娆^̹20ʤy r;Ybf/=i>eyI9̋FwAe D|:{:;% 8-.):p^ %&'4A"}B5tTP$HBS(t"8JugƳqMx$'L x6Dޙ#}8NL+WKR9ߥ!唢a*$X1$R*&.-ki|zWK[[#O'~z7*; '^;Q5O=`FsT^3A%DD02WXq;=Yg$aIHeӂl4{ SKm- !^.$GL%$΂X'c=#39@\K;=;]:c+eebZ5'@H~}BEIL2cO(H 31)qt)wz s>f>,K2lTh+x1q]6 &eF2 ,Z#ՁHO!źeTY_kkQxs9uk/one@*{5=o,qr?hcGqĂrZgK&*9ZhSRW|ӎa(e*VoM>G1ڈ:FG#$K3E$Ę[-Tw6K8x46{OV^vM;c_-i!LrTqryX-8sp9#gV8X BR:ItǺ_OΏoO`tv έЌ(xzzC\ow ?=דћOA߭gw9["_~.B(O~[ijN<| z:$fotI UgZ42+ g2"\] ]-,rKn5k0$Н\B.kI= f'ϺzsdB6+ z./)*av)ԩҰ6w @' Y:.*F5OqiSr+KOng!k|sR;^w;TW8Pw =waf>hfN$kbɆ:M`mX^Ь4c Ь齁2,Y9*eLpd8kYu;e҂Ν(N:XAzl-<15v'N VB t|9HBQ ꝫNBekͬ] jRut HGQymZgw*8cqwR Reh7HuCӿ6%+ yɂ7bZ }u" :]ԂÈ(Pot aU$7FEpe WVCJWt Aznl $ TR2AB,<#:j>1ϔT`źylU =WZm([r0F0pq4+}.ONR˝WtI:^)i)JwN @zIkib׬ 3{b xюkmH_e6y~T퍍srA`B?emygHzHCFqGwMuUV[ E$+TT'c 1ı*HN2.4Q80U K\V)c$pP8Njb(Cd  )e 9K4XB¹XLIϗ8"%lhۋk 5wJS٘xJ9\aXhq5LA.qr+W+B=X';\*2⟫I^x¢A1T*@G`8DCL:HQ:߈/ǚQ$KC.zB;N%'XɠNE9)R+ʞ2b;~,n~5l%GkA;C\^+"~WG-IS9?燲2O?5qgNPBEifn#X\s&8׶}ȋSSs8 ~rro$Z@J8j=!Ytp 'JɥRGբ(KQe)gy7!Z;/S[ ϧWYÃӃK%J1i/v9m槉}F>QOJUcq&OyY-QySxq~ir<8Asb;TK- ~=?^~sqi4Na7u3/,QC +\3pп2z|gwv12rwӏl[$u`8ǩ?#>JE^?KX(z]w4*Q& ۅ~Bqۇw^|uG飗y u8>G4 $z6t}˘M#3_*_ϿWG=.AՖD 426d0})G;kƅ4Fx[ A.L5, &Mc1%UOn0'$T&D#xhZ~wsȼqJl]!:fpTPj4>m ?j)1$xbD 
Zsa?7.֏ӻH:@wTZ4\Bq|"JLd[=Ω{Ց%,+UM+$L[~D4!Dkm!-xu"187mB U [&ӄeЭ/3o9 [m&ފ35A5 'dQVZJ 5+IJR͵zM582!<ˬ(j=N(_D"(RAK֞?ZhD8tQDݣ.Xjv|07.sG8˖>:nb]o >:)\O2|efPƙ?B%;Y:W|Ϗ< YKƓb%k'!*QH磸Z?&j%jMq>jPiK(^TI‘KG_Y5g^!gßBT{5{6_x[ y~h+nɣW_3<7<8Uab.TZ*'w~5[V`\Cxۢx^dyz.+|Jcq1<H= r\\>ntyo7ok%2CJ"%Z*uW,ݺSU&-{" BRF+Jw1)TF5ǙԶ mR$v]9C ``T<j[#TɖYr7?<8]T uq;(b:x:j m_)UUgL 7=<=ڜ^ Ӳ_#Nfmqvzd>e,~eo捤 y;]m<0hozwq/!󈶗 D \f5;Bwf9ˆ5']ve5[N|̳W=n)qCKtcڪo^ղ7]ޖ_NOdhIV1N /eoUi)I :'}Nz%rȱ8}'kO6) 9,u`IP>X 2ijMPu¥Ҩͯae&PV; Tx㩎F&잋` `A牖4$O-;魑L.oIN8=8n VCG5 cԠ$t0 jWtxuCxU.z]-' 4\o)0ŴA TtJй>@P~-߉yPĘ-ډ1-.[Hy/DMDğLsYf$O@:CZŝ$,PB76H WƁ>PBZA= R-Y;NҬQᚒ4k6htٴVۈc&B"(JY{9oBNSGSpl)=j!pz;b,ATP/DM8a^IehʇoAD>+vd}>Lz:%b iM`TY4-R~L*Wۚr&`Yh!%'xsDP#Netltl3] ړ^[h-hkvŠn9pCˁp_RhW%Q1IJc.(]Զth^ė|_K|G+}AAC :xA?A%-#TˈL`GEΏU.09K^/E1֢ tG4Tr'zmیg8>zcz6Tkء͠#E"CGW֫\ (&kqZ)!Z^:l))a ":papDB5"De!ēJAZJh;8pS'Ǣl7*H\@D  L(IaMA,Yַ5r#%`y((|i8/1Ro]m *03N\={v4ch ,Q/3yM]:VG#6zsIpQ2 b)/mK5e߉'lE!%hKR:x(%dS.溒lɄG $/9y-qN8]7끖4wYβwr.5h-y%\l udskK< ܿNsT4y==w'J 2N^ h\@$1'AJ3Lx(AHs #UtTR$H 2O}z\k҉"ڷF9U4<厮g݊zmΧRh,1jwˬ0H)83F)sP.hs`r/wA|h2JR9$!2THƱ`c)U1H*/Hm[#gmȏُ%Ǖ\gyvK_X̯(m?=~j5XV j) pg p'!(fJmc"^6k ;tvGmhXRYﴠ.N<*T,2%άxD mCn,u:cF*jCIRQ GEBkRţ+ŭs_:ְiZiSvZ֠-ډ%+rWTA7 kء7$DerJ[vfϖFe*ߞ`٪B1W\ b2mTڙ\%bDWsp>b `LDd:,^Ρ I_p:` "?$/?ş^>+FóрD';`q$a_c|-lf~+ذFmLߋZf-;SaYg:TH&+$'1W\Mv\!@嶛L'ԙcRFbJKBw\erΘLzsԪ3WO\iḁGMW/BK}wp.4:s1ʊAϨdHolZd0Y_ tƄZ?[9[.g{jyAxʞȪ,]AA9lt^ 5 &7-J-3m{_k% p!y/%yIOkh.g:/+4ĝQdrז,`8>,qypo6Jr'ADJι+YRGZHC?T:4{ks]pxm5fJl v?@vv#wm%CۼlH d16F;_-ɖe$˔%=X&Uůb}ル20>"tsVVVirqႬdVVIlhܜj+-{,GKMQ5*YBnfa <=˷z7X d_g `Ȟ©v`mΒ<}H58n).\e$RqOe %G2+A̅He V>]M K׌ x0fo3!>vkύcBw{s1gb-S/,ZH^$x.EJwgfh0ZB)UNN_Ղ*SΖDu%1 ͞A`N# Z:,^Uj⬷gq;htx}}hr굇u)w>SdiWzdnkMvP8Ƒxpqkjs*7G f0:?/q fC3_>㛋x2џLʗ>t A߾΂}1 .j y7V_.ې{N RTXJBL|W fvA(h_`7e d@g:jRղ,0mB8,\dB`bZtfflfvv20{ܟa.Eֽvٞ 776&㉩C7jL;e*]'ԋڰүn,!+Ͷ5|\MO d[Z=sv;;|%Ycw,bq廒Ŋ^XŽBtpdMݓ`eZ^hvE̞W]{M螸Z,Y)ജynEGxaċu.TZ5IE_ϡMʻBQ~aw[i&%{ `YFa4Y8y+gĂ}@ѠU0YBh>^ k5ؖ+gv*¦U|)}RkÎ l@M"c)WЋ푔e_lۓT졺ׄ(F)BL_rCLK9dDM@N%+(5ե4-x+{UxW_km)%=kw-ǒ">8]Si'EbmHu_(ۤq4n`0T'F 
w^4,W`{7/k'_ʀ"\L@2T6TZ'ꤵS`0!gil/BJ傴.E&ΐ/ X[D Yxo00k/Wgzm:iv1>>foWLkУb3O=ګ} WozWY{x9AFzB zvYEî"TXY֮\Qnymu[YZ'f]2qY)G±3jɢ>*Q x-@BBRB(hx=aˤcמ9Z2&Ά1`|EՅ-"ehhj.uo]^/W:/7=Ϯx-Zt47IiTnH "Mt"S:dHy =8!x<_Ô̇#HB/:}Є %:sYj2od`PPKrpRfpn"#CH">z}(y½["mڽէ>n2A&%w<\M5DDJH\Ɣ1қf&" Co55mbH g^q6s"+Jd8APg^% !m[>SJm">?~ťdbzGR0XY9O7'̧8'9cy1L]HLN;?O\WڵL[D<9$_Nɲ5)iQpʑ8E1.8f47I.éLҕܝNn ^mffiVsKQE/f0=/.רōw4 s(:<\( [H/)a];N]HfD#Y{'ŏgi.rGS1yh-4(54-gQ3V+rӱalHՆi>]{`'{FtvSfQY~&Ci8j}`2@OYpB@gu]v=+L^'ee?mtH}:W>)+gKK~mj҉6v9^> 㧿c~Ou@#pBAWW ']tt;]Ff}Rrۯ? _.'LAi$Nȃ+P C$$'ݻ^NN¬LHBh[]`~u_Wu!Hb; е.cٍ{ɑ%`15mm˘f gF3<{:6.tbw Q`SbM. I:Dcp\;RsALK.BN0e6 !hՒpAݛ=aӚCZ!rlԽTY>0J̕jl| I?vіg)1jjK8U|_o S#Uaj jWaGibz'="nX 7BWcF(Zq$F NF4EhN?jԠ9yk9F;$ofmP$AЩXKP0% FYLA2J3M7UI3UhrR:YހNU1@AfɏWaǓt]59p[Pړ14'.MyJIZ|3(/Dkov&hZ+Q֗>1Y~v2\r|8a8Q_ΏgU5!I&#%c3$Z*H2>of(&6$Q/ɉCOvcT*zcYo|f)w|u"jɖu"Fjԉ^{yʶ}-r[PH Vdt~Hy^yR !0gJzء'BG1?ttoQN~m%gk,[JL.0"/hޥf#L(4̫H^pH'P!%AKp3"wH , 1sDðqmנlthx`'EO5-YxC14(`\9/4cxW'Nxu [o8ubhԛTZNAxtVNx[}#M9i=W}eo by9Km<䈲vڝXv'j:݉J-0k7zR#RW@?tuGSd Qء+R#کg$H3p@ k`? pt~P8SJ3b vO ~r.464<hPPJ~6,w΄%|nɋUa;{Y* ےi$Nj ~y}ry/BR ($2!b_g:q}4BKoE镜d&8I&q@+E9+„*3՟8a,]gR~ 0NF_35~y=n=y `(U1.>YF8f/βg+3äT^cxY?ȳ̏! 
,ĺA6B>is?%ʆ1 MOۨ«ujnmf[~>n6)i6I>Wi $b]Q}?׭#mj/SF5 3H(򹖜N)bxЄ c^r.`9OF7XY&}\<3ў6|Sh_xWa(׵i({qIQ+R0jF >Dc2c`/^WY&m)\wŠזLJ,yr3Ӥ0g6\lY6iz|q"(=i\3s|ܤ1*K"'o2hnW&x7)]Wb&8l纰Pd}j~37jTp8ɝ9q=7Biy$Zhu/8TFpcFi0M6gDt雟nkβ߷>:j0Pvx(x`tE ;V\q82Q(_J߮^TQpI-SH:[NHm]gN tcM'%H*D4QBIږ6cm6 *r|97i-i)\=ibM3\XIߐb_0M= zJ䲣 zJ~PIp4T#RW@0QW\!E]%j5:tuTr֩oG]цKOObR<=ڈ`if>ڈJa+z1•+Z[լ$ %+erA8ao<ߙ ,CD|?l4L_0}X;<8(#3&N/W+U~܂S|r;ݳx;2?=qf0 =XpZ.h0w0ǶцyPt,'d)?pҝc~+([7 vd~cO% #g7XE;mc4smtj* DPKEn@G'=D2tuTJ;u ERTN/M> ML컿槗UT{*L&b"h3 @s- F͋h A[F@K(h" -H(8T"MKIX>` |M]@'E3߂px3|e=mP+w&y:~TIZ_`L |KXHӣ1}\Ɏ C7}Jڙgigݣ7ݽ-"z.Z$Y$ ak#E48 3Yf[-gBCT9F$㑅H>HԠ;ED b FрG!eLDr&Ӱ\~zHeMv(s fTp`Bm,4[jǬAr֏W(!D, 0i1%!0l 6 `E Dk% [B"1~47:c&QAsgт#*$;A Utȴ,^o@XW~hYDSYS ,ȳx{Or$1U7A ~$rLF[)VΏ|~:8?R)4ү2)[كӗ2}F+ 81лN{d/8ٛS{\f P}sqČ}=ܼ.^rL;/^N1}:oV: foŬ7d;YNMUjxƀzN[Z!Zfʺ5ӼqJdwrrYN2Hf'`>`.x֝ V',wCLG+(*Ik6￳`eX^"|q$[7}obʷֳLxSTx&j\4c|]Ml;BNDN:-*gnfCctimw'ޖ6NZ}rPޠ79 ȫcz"H8YM(U:E H*fj5L~oi"xar{cx|[vQxqc7ܫNuA/x )aujDjGKSx&o#sΔ {`7UN rtqY.β·G Ɣvi TURo qz7la^(gj@~2ih}̭$G!tvP7yˈ'f瑵E2wZyfFSSj- ɜAùTNbV(Pʍ"VF 0,"#aREb>Xk[B@!xy,pac#0Z0t/{r#A'>+(W<TuJ~,ˤ$*U+9g 1~ә}LSy,z+&BN; hBaga͓610h~[a3lblw+*D@8k26=",y+ΦLw#a堼&LI8FҎY ZbP@^#Z[#ab9MwȳZOw7gӯ>"W5ͧG뛞E,˽-yMRbI }f+aYiC(Xn,tH\,„Eq3H\E[Ň:4>d8X8f7P$,̾VWhi) }5z ph m~[q&$.ԴSv6}=v?O~.p;*?8%p,Ŝk7tnFIt~e+P]O'Ɨt] &^MYނ 0: f0bAlw}{#J^gnXi-آq: 3Is` G.5};twOp6T.豢QV4*'O 3׻vӻ_^9.˻/~xuq۟'Ip)hv&;v]t[MKִbFs_*n0{cn}A)NP*W@!WrEo !5!0 _ "tkQA~%os5e_*}B`@ݾt~ɕ%>JiP\C("fKp*Ƙah4)Y Z=_}M|3wHsIM^n֏Go1>oNX&=Z3XT0ca>K1q1T,"!0У윺i¼?idv/AkBH"qB: ,8P{Hq)*7,#iBruP[-/?S粽.l{u хA% }P~0>-$!~Zȉh#Jq-YR'9 bIJcM=U:82.דWQ80y08" ?)aPQ 1 uZ$:YsG/+;ꋖ0?ײb5^~>:DSQAwm=n m^d>Cy m<,Jw =E]ƚ.n3m3FwUf},~՜b~:濺/WzQA(%K)`{zFŞ6fOeK{ziR6{g54:(HNA7K 3-AsIM˙r(PSC"RH]k&XmL/_NgKjk"4r[[#n{7"Q>]io]'ݳ0֋Qx[n{.ןlҭhڼGJ~{6R9fx{_P75H4~kۮvS>ϋ[Xya<mvEߡϏeu矛hrD|B;VzZ]\>] Zr?#(.С諑[&67B/&3+Ao_ljrJ ;[ 눌."d/BCi=[ Aҥ RPK1 @"v* [PONp.hKSu_H$# Im& L Q% 9Y]٥ &@) ﶶļ<VNo&Ɗ9 T45!|I=MmASfz\QZ*mLȗǬhuf懺V&w鳽nfKze ٬]0wE"2luGX۱.LBG cn*,3X2>]4P4^H[ /,ǽ u^S pt3w5NJ1? 
f[-װSSQ=򒄋F_pq5׺K..f+zm/\f B,GJAR}& _DdcLhk:L: Pd".'q[x3\jr Tu5`4||LISF> &h?z&z8X>~kZyah3y/ĉ|}96/Ҁo yC~) -2Ђ"IMu3U듖MA7/eRQ )m4"HL uExVЅhOԅXMi qxvyP iMQy"T}\U F- d ب9ӯu&OCa}Ɵ_`0sps##0R"TZ躥d=\!QarN a RN 5AiUaSZ$XHN {}Q3x[b.":lSʜFô\ j7jڪGbHIpΎa e eN滠I"rQ9G=%XcC*_ZbH!m1Zsirb)mckozO=CzE{E,\M9+}nr+Seg%nqk7);t@ҳQTv U2|үx*b4ʠEKIaAi02+rFSQi%IrF Q 𛒍Fb֢hpi5cj RZLj(!%@t"RfQZ1=b_IY5~R-c8|@(xJBE's+& . aZ,7fAaK]q9y>]9 T Q*'*&IG  gʽ\usCl2r f'L1zy0Ʌl1,>vF>6gܫ7-Sɯ']'LZAWZVYB燇:aVΘii0ȑӅ d0DMg^"uO^1Ld]F@ S4p\AゐM7dO$v DQ( $g!I&2DiXв=A:)`hQ9; >s*U`ZЧ K#g , wn7q-Nf ][QY?8DHyR>:N2A$,,I!s]yZcKPٶU1%p$uzG2V֐"iE غBh3s# K֔~y;ѷ6܁-2]؜f>061E! uQ5 Y'tm3![eJ_q6q{{%ifAÚlX={/y͉-Ԋh mVj.^&[:wj%^h m: Rl`p+悺ZQA.ORLMGSgGm1oyZ-TFS3Yj!= $kc!i5(H e Qx"eLʆ5/lB^V eCO3[dnpUwJ~aolCW@ݛdQ V;a5a |C R&aVU?ةo7N&} x.醮&NO9~+yBP 1S*]Ͷ\^{ҳkA>ّ>Ov;f5.?%Է]ZQq-Y+ǃmr)r e ~Q0]^RKO>؅Cg0H9EQ Ґ@c^Aeau63 TF~a?7aۄty979zfC^JYIx3 `Dy'01E 6eaJ Yu @%#AE7C!|.Cӂlr2g|JQI$ c6yYdFY#n[kuwm¤>~uw_6 y5nQڽX+U@K}dCNFqܢb2Q&-DJAIġL G63O?sbg==>$2bsU} ]d\h '!&d/X+hA*fԳHQ&l7/ # ^xq[n2KzoIgj&zFi+3_]Aw}2_&-oTCL3~: r5hE_Nxd=_s']vՠŅ6ʇanhq}k.X'n-* _$/}-\#ՈZy]A|uqE,s[)jcAawxpV[[F"At(Ml1q6|߅X,IWb7.6Y0z"UdR|Y0KkyPw^Fe*GC(2OqMI<1$"r|` c/T~p;uIlț%No%#(x4C#7MB7 gD8Dgޔ< >w> nqЎ?Y7:$x&y &RI|>D\ C<Ĥi<neOZ6H0g_ƻ R" 8KN  יɩ(sTd"mRW>SfBlT>Ͽ{\rDJ5_nиZbКޱ7:SLqQ>0gP k^f̤.ٝpZyJ-.n;쐾?Ëf=տY8ƠVFV:kd[mmp:u=q6Gr_GcJAhoԸX](Z]+ 8&6{}{}o{N>o|nQ8m"H$ޥUlwVV~V9o wzK"ܫr4 d2 0JcIןaYp 付 Ud[r ш!m~nhcvm;Ķ-c QJ=JD p8V7{mcLrV)m \xe؟Zc;̙H_J_Q'R [*w_Bq|"KLd[>u4aUOO!o:3' QD$T^kDxhhZ*1I&TM:kXlLly֐˲g meVɟޛF]-5:\=x[IƘY0w; g(^1xG@^1^w /gTD; 9`wI~&%1E-.)(YGZϳiqve8͟14g͟=iP;=즹@SgyvhuqlsA$m^ժ75;AVd(` 'W2 ^VJ𔤆h #}ttsZF=#}KYXxOIHԲԁ%AHbȤ5A RK~D$]_$LZ{-v4RSLX=--Xi#1H $G}O@I3'|\hR`5d[qܠU S0լ txHx .~OV_\/oL1m#cP' J= wbN9S}qn!iQ#4j21AgEdQ> "X );=I&XIolVC}$&zNG4H;VƝY;g׮G;Q(pMIs4lZ+mc&B"(r!H7SNg໥KR^ ˋc 8x$mbN2;N~.ΤD>+vd}>Ll:%b iM`TYTN)Rʭթ}sL94A0tT4А{C<9"(X'xNRws863_Whhk~igP<; fy/TBDJ9Rђ Uj]Qm$"*NYK/y_)R|E-!u*j^OPI h2"5&u*[#r%ӊYr-mƛqO*\d>^#Z8vHHzॼxja7W$~4.+D+Ǚ$%LOш.GGGIA ^3(mMTN<Ƽ3- <9eٰ6 qC 40;,(řP 
82G9Y@/MmPLuxtbۂᇖUnUhcaXÆr^:Y٣i&'Q l/3yM]:VG#6zhsIpQ2/5!5R[^ٓ[%dhk_|f&CY-bՃEYއ<͇BS3Tn[4E7X;H`ϔ64jޥfLlE!h+RT:x$dS s_$0a],=ԏ3eyFUdOo˖g6\] Ulŭ v}r<,7rh>[zp-)kQg.>χcR%e@GKBWp>9ÄWR&4@Oh@rLJ(-Gp8ok T:RDҡY?EŸKx$r'[4\K(p-str$OW|CM"8ǩ01OLHρi\ʽI\VJfZs$;!Td*$X1$b &TR5c1q6XR tC~u|n.P>~8tՓcx*;>;&2H TKNx/!"7ƌR R3P"^ei!60ceɥ *GI L#xK'-+%zFð`\.6:ڬfl UT;-(񊠍Gj D%L{^.$G{7*kK9Kc["4 r.+$( v*h61P* >ic-7LfZ79Ӟ ä(Yl$5E QJ+^Y )q\]Y!(\Xύ$!\&3 L_Y_6ȸ4uI:[e]*\%)R!%ϟ"R/"G"ilF EĐцtʖY$u R-sy/ʊW# HWdQ{݂0|~O'['%8 }pE9|_BIyS%׊hn9p>>sh||ls۝)Ӯ\yjgKRĬ U&9:1H8z5X2X(cĸzzfixJs#-9oManJ<9IG#^Jpcja:K4G?X7Jw2A&%w<\M"@Gh%$.c E#\ +#Nzԙ9oRižPjC+^ يK88)N6D_㯻8^jF'ot|3r,I+u/?/KWI+Q˾6&˫Jhc5˫$lrZݪUm⼜@Zs,itgxYf_R|r<8Gō s>w=;7m;PDxsk`lٛ7M^#5bLAmK+|}6݋Պ K*N+5|QYO1R^H}9RWw[v.j҉?Džu۝н<'u;}_ޖx_ǷN7xsiU % g,¿wwi;K^}OM# 3_;_D 1u (6;qtb;L-idS B0m/MD 6xQMVy{ys.bmB0v@`ٺ۲*E`Kl!6f+p b ;S0zum ~lc ^~FwY ̪ Ɗ.$:Dcp\;RsALڗ$9).e6 !M=VPzH+$8^Mf2OZ褁 c1T2JY$u ,4cPlBfkJ'&u&meV$|rw+Z}VK=SGDE+e`iHiUK I4 9^n4<4G/82%e%tA.[B$!BQaD+'a'UxV\OF/i}!!7 pQR9 2M>]҈Aƀ$g* !F ,7pJLef<Ϩ*SS9tGs}cWƤwG#6ZɷsRD4BUd*8T!'Pزl=a:x/ ʊqJZhEHp&`DN^7́u$9QxhP{ADTl<>mź'욢'E7eFȕCKWKptBbw-O@OWt]t5~zuԜњBgϬ9[.iB͋]ռwJyx} J{r> Mﮎg&:LU=r[^3Sin"tL̓kxZ|O`5[jo<2 xycNrFHfeg*甮Y#$|lGrҷlGqҷy'3. 9+ %,X4!1iDd:#w6F:Z>fk섍 C`duKCiKQܰޘ8]>}{~Is+sj8e }6)ajD-Lm0u(MALWWyGIKshKS^ ?g4ƨPy@,~gMٜpVC&4l啷Uoh GXXKh0% FYLA2M& ,O1јΙ+Ȓfm˸.[„ j7@P9i3>ܧw}{RYW alt -S# ?kڥNW@jUa Pi4m4c-T&Z {$ w-xܺ bOB]c(Gu?Z8vQKZ)d`3>hc=y*(LGKEH!{+T0ZB3`)drFc" dE!~VƍRIDګaiLC4 Bv\AdhJO1Dj|Wi|8W6KcЎqh@ i*h8jr&F-#9=+Kn.P6mM֋RĜ5Q e2RM[YoW4c+[Q-~…hZ~77 }hy\7>}[lPdAJ&ˉNߜ$d(Uj7 sQ0齐M dr,&:Y01YoCڍiǶV[4lEk[[! 
9`CVQrF4r^Kc3Jc: ]V!7g  r /:dĂI`SI#OrFrqa{ؘ8aԗn w"6ZDٰEEl-b#i'Gmֆh<=2>tȄ!c3B kb~fQ/շ#i?Ҏz= hQlaUDm ٞɌ`VQFf e61x> ȃ6j%&ܐOQ599㧋r M4SlA'!CBjq/H k4TҜ C?%ڤ-838X (zdD2㌧B V f/n6s771IE+;#2  Ev]h1qփ#n P?Vӓ:~ fyb/ӌć Ϡt6GOa)R̭&/]jfOZ?Jv62x'qh&%YrpP삗sm7^@x9^׆!\=qk$= 4ί9N|wBO>/X4Rm :ףX?}"RO7Ar#'/rs,QEZ)=H QP#2WE`\XUVC7WEJZs͘+a׫ƅ/~Kw*WWyXcq N;L@WJbwF_?xE;W @,4wmYe&m^lH8|1zJ)R!820}o5%IJlRl+R7oUߺuN=έw8^zҬ߇Roqp3JiDZo3 al P6W cjv([iT@ъcCM0eṡgkOe+p Õ`lU6Ws Wh&U^aRDHw*,'\es59p 8y2\r^O 9p V3w5g|csy\pg7ym7.|ŎעJ.OMRYҟ^nulmV:Sۓ,˝=D- k2lӋi 7jUbEҢ/nR\ F_}t`P}=46ru^:<}$϶g j{^zX]zEKv/" J%{۠54>Py:uB3B3@ZqֽWK{GiⰚ^^f:[.@IT21 _%g *69O)>&9SO(H scR∷F 9Aʜ\T`6ԩP!xp!Kڅhp`2Qf$Tvz5gIWFs8шGs, Qbj.8BȘBj2Ȗ)}4[Ri6bo5!K3J1&;Uv+s68hrM-mz >B*[8EM,,,9ܼwI 2>Ǹb($ 4ژuN`n"O_>HZГȑ{D)`O5zIum:Qg8"Qh$FYQz}g~ftZmG!MF= unxcng^"PMFs~a<"_@,:|TRA`YRQÜpIWxWa"L_鷄Ԝ^]hU=c?9N&xMjnd6i :՞Mh=Ek,j̥7 '|xj:wA͍{6)+ jދ {v)1oDO"2ػ5unFt |)lZn>6$_;yӼ1E(XԊz50'^7 N^<G`Hu͎&:Xxg3 7~b`M$vglьxs󚧠$H=WDiJIe5ۛ_H*1Jd'ۨojyNk;ƅ'#x==Svh9IfIfsGMoB7EMըIEMI5ɢ&Y$dQ,jB5ɢR$dQ,'ߋdQ,jEMI5ɢ&Y務&tQ,w(XΓL$hAƊ;'Lz)qiKb u=ɝm{=sV -x2cdd:88B%IPÇ)%y.0x+Em18(l8Njb(Cd  )c ۙ9[j/-`,A|i#[6e|fmx|is=-AE[mUE<~guH iCZ}8H HyI|DytGqo?дMG!8 5E/\2ɫj΁` #{xErieHt|>Ä~~6[q. 
R֋ME{?Gdsyv,ψ\io^)#F_R3oʝarc}m=)z)a yxbffy#2&y .F2'ktP'U>uk.׃Mt1@#AL#%#C^~3V}M?:,X3T9VlP)R1K&ך)tv:Xt/ER-FI&"ME @N܀1)TFHk#6(EbhT;ruP@X"!(O'ccxٙ9[Nb,]ֲ@4ӎʏn6DNoE- ܠL?Sۯ7ef nFL2MQkMU76͚ǟ>e>\?]0cvIZ[Im\}R]d-tzFkʛ_ݜ-|~4Lۭ;>`B#ȿ-K^pyn.mʊ=?m|(6!SeVq9%Dܼlnm͵ taJ&+VU Y  q}E%H [l/$tO)d%uKpISk" RK~D$u,3jH]Ou42ָDphiJ#A1I̜$˧>/TG.!s Kvh4\Sf=L}`W #U93}6Bh.0s.BZЧ.䛭B!_ʘ0foBƐ7+[>s$#4hy8)hD;ICICKʃIR$B{ͨ&mMTN<IU"g*ϧcQvK,_o~Bщ)/].<"8?ZY' deT>xkz;.SCO) <1T <0.DX盭*CN%(at\xJJMd?{WǍ/ l|)KrI,pX_60") <)&+ΛGF#K4#SeT*Ve'HuDE'/.tI󁼝[%0x gpw/]ocB-ì+9t9m+|z*|*" h|$^`s@[3`",JV^Xycpvds#ՙ kd0/ںq&zc|UUݯmZ*&@ 2v:S} yUKJQ/SE7Vrrnv-٭ ]5QjS{G6K:ή)%k KU $:A!/d$dEL& bZFTb5ruwVm$m8bI}jlUޚS)J쯅 38"D(&i)E *Z|(qK(J[߾E4Cfd(hٱIYUY!Ya8S;م6n{8wW r0El&E"vx+!}Ҍ =RTUDTL.S{bAفHS@k"tNPȬ1QP0"l@j؉Rn ]SGShemjxy!(| KNQdEKP0Zϭ®n.o@)KY0RP]*rC>$bNB*PL.h^c~Pױ}1%J&\ dh b8$HVP.X:G:vn+s/Z׾膺w"mop3b/LR`ɬ4'e2@|@b0D dUsv̔c1YrDF+h\S-ƲO1kv~(m~e A#JIbB/|2Ȣcv PR>YD'c"Vù%5Bߚz88S`ZZa~1LRfvSvDYfûcGGσ0oҾY7Kfi,Y7KҾ7Kfi,훥}o/!aAك!%q(u;)i $%U %q=v܎s;znǗ́t:8흲SwN);e{l/AutN);e{l흲%B=\SwN);e{l흲SwNi:e{l흲N);e{lvN);e{SwNbl흲Sw=JJx}|(LxG4]שiw I $ZhW1hTZ.J"zߋ޸*H:uYbJq|V{mq6_]QB{hŶ#gBZJk_3>)kɲckS&7nd> $?d(EF-0Z<=a=bFVkT4ԦTpPT0VWKьR#eҺ9*d6_D:4s\Ęr(:9A"*uOWol8w,O? 
дx~}~1rewwk\5i4⚍*O; "RAr,>;EWhrZB7A9ɺߖKS:Ki2d1Ȏ^ARXr)eJUHph6|#zz|qv6+X>Vhꐝ}E,_=,CQ uHB£uSQjթC@~r<<7#U(X8j#0 σ`?}{:@,;f #c>/ߦbP)гWII h-^iփ>!S`h|= lMemWwe_0w#0 U:%>v9͠JPuv1:SG/dJj{_thj=dU[9^|[]w{4h_m3YW~㥇=%3=]L7~h Jx J) ZuK[GUb C7 `JKP/CF69sm NIlA;ȃ)F&1}~,Ζ(v~2ݪ:>;,J< iRGl(v[%C2iA)&DM$dJnxigYVF ?\mQu ]hLtf/ L vTh\Z%G>1r֛,35R$+XtCJjJ25#scxLfZ1ޕ~^[tp`hנ> Fq:1Fx [azF셛 0V1P8Om,9VfC[8-2,Gh& l}o) #a@n&2傹qe{^t%"-/|!-:mU{Gt7]ޱ{NbuÇ[oٲ n:ME2 _ LCZ#[KxY8e$J8):2@މ}\d}F_o6Vڶ*]NYVgChfd/Y#.YH;uzniR9yvy/}ڰ1W޿ CC\ꀫ9J^[LG?uzvyAy0  qiP AfCQY E30 ۀ[.d)OҼp{6x 7C סJ閥CgTAu tc|1xbN(OA!عbjGK"!jÉW7}𛵯vJ1A /dWj@ ޻ 1u@hB)ڧ)$69bLtX@!%HF5(u+Zn}[_|iY1Yb3̴ ]iw"ء@o}VWEGF=8%Ah$exmzwv;G]?tm.]Oy>:;q#~m~Y@105./18G!-U-1P'U- I6ɐTJ$ 'tDO-MY F,ɋԚpc-{c:o֑hॕg~2zyz7GT*k2˖%/[)%#v/eţ0,?OglV9KVژ1#0<\iѬ><(:9y[(O}̥g糐r: GfSXް"<9(:u%??&z-/l}?u۞]Z:ґ`gTo}9eUjDrO?aIiMF q7 з/&}Mect6zضU̖T5MiDIuv'}c~w0}$LGB+~ݬqc%GJ߶S'RXTZs} !J)) E:8u;Yª_ioCф("R* WdG׉k4 a$!^Ӊ;rY6װmNQw=7js-pzx+sgx0/$#S.] J #YR! fXV頹6TPZ#=#r9>X{纺ZDPO-mZ{ʼnT,K\t?iܭ K iE(㼨%$^F.z;k,pw/\[qA^h\k뀯fO3ȋa{wސupɫmChxߛowFM/+ ]݋ꮎul2o `_GM)ˋ&&s޻S(zx0Bm]N/f QfOWЂa:p)_'>a5U$+)T>EJTL끑[e$߻b%KyoX$A(BhdRc(hO2 t &۩´*Fi&ruP@X"!(O'c7\L-O첎9njegΛbv}7Tho%7mf]f74ݮ7==;l+:үV%KWmQP SvIד tUswA捚vu{嫛Kwt-/ݹ^fZBmo&snh8Xt'KwL_kBw=~w!BanA?g_g)uNÛ|Hb+ -*Эh12!;[#>Jn\ա}<;Oj :HoFO[^3rԃ%!J >WQ餴1d]O[ twdz2XqJ<&fy֧B8%<]eDG˂gD t)Ir2D"(ӒGؤ,N,BV0G0 Jf\$VBX` F+I:<[J0l9(%@> H$6BH٨3uue0 ޛI_OТgn?J 6 Ug7vl/d!Tˆ t :zf<%1΁ Xtd$`sޘyTd@3'c f_9#}p[UZcwzpw]͸P]L+}eR{fxcr9hCw'`msq"ubj=U-!.{vm15ikepb/5պS .,D 5˻v亥4?_xI6&ȑ^!%bpQ"رBi (H /Ew Nӹֳ:,W0<ӃP6%8yG3j Ϲvk \*="F/kO-w]/!{¸pmZqrb!8' L)2tdY\БeiaȲ td A=B 9>Cu>;;IR ꝫNBekr&jaHK}$]P͟n2ԋEngCUO Si@dR:x 0 R-Dγ+{gLqɂ7bZ ,qU$ā}Ҟߝ t šOpY9P\jCkĤKJ4Xq礠 \[A.K}qߥޕ;13;*dǣ`dfnwJƥX,"Y:k) 5P.%'A 80et 4JQ#e ƃj20¥Ch ( de (t% Vf˨Bp{y;ۻXÙ/mny-Z.* V\)DHA)XNe&JOC;_u'PW jg/dOfĢ)XHڠfTገ'B P 7{!Z2at?en:$x&y &' ZpP$V"?R!Rb4QaPGQPK6,+VxpBXP'?G,C:s*ʥXa"Hw.)3!^㟝-&^䈃-l IhZbћ˻cyGC<M#bd}jMy2Ϭ?{0?r{O _xޏVɼ5aQp@5:ys #vpF>)v>`~F %PuGDKK}1~-ti_r~Ii<9oP$Iȕ)oc>0 WPrԽ&u<an@IS]jEJ|~^WÒPn@o_M>& 
>Z>mm-éj҈<F1N;0r`7`.i+wHĈ@+~ݬoُ(?R:@rʘIB$X+,(̧(2f"`"c܂u=Hg z٧.sS ODH4J֖+^т\'0UbCC"P{]L'e趽;E(jި1W`!q⭴6|UI~)s|TiY&U:h k:2rȄg{?,C 8")AK֞?ZhD/pDݣYgv:rsNdo\M~M1@ǁ2*CӋ|S}Ik3sMmlx O-z`$V.MXvcR^$ZHL DL`1)D*@e5с,Ab1J&ijb (w[ %QMnpR9& bl,'H瀨ng]nl%λS-P:_LxNVApnh]ozzv^VuH7V_rh K:mٻ8ndWM|E.6E#ljŞFf4q^V+cb:&glR1z9w}^P`32g}|u՝.n1ҕf<^sO+~_(٦}"1KZf/|/t}5= ;?!?,:}9srRPܼ͟ӝ߯?bjoNwX zyo3$N4$HBǛJҷ챝$[>I{O`b)r5^$V%w օF$t6GLx$d<eHD%e6ɻI! BQ"'ԥq̜M$|L22b}QR{Zz{=XM5Wurgc`j nGibz %ңv|-4;HdP e_ٙľ#őĽqdJ$#)iC- R@PZ1Lӱǖ2EjWcmRiEbJwܻ Ob!gxEG*ҡɠQX2*S"O1vM } cYkZo˗OUIo{!9T~ iZ~Jw*oZc~V^ܝP v2󅘞hR7L(#X(Gm"&Y eӥp Wt"pBbw!K^(iyVw>zW}Va 3ݽEnm˩3(3*._G.0.ɖ[6@(Et%{n(չ1qsiU'Cs}r8:*Vh9,F |8?^/Ce&P.$K!x"1%8eE-xGAX8KM7*3TȃM!6+eD 1oT63!ojFaAQɵ-[C28Cfd(訳X sb;)"'z863x'50KDlFD8"!"q+!]Ҍ ypRBTMDPL.;P\UD]2'ymKQH2l2ၖ:YtL`32fVgTJey٠,}G8.NXǴKq\!.qqǺg1H[_("l Êhm", #xF ?[YVTK18e(2D)( 0ZW?E#71 o,( o l%/ Ex_ʭUϫ8cc6pJ&lle.?I6DF#>)icG4nr[NJ7=^5F鏠zArA!wF3o Dp9- V92tut!2j*遘9f/ % 'JшO BRsH ]r&r'ܦY5Q@HBzYtK( ]@,lOsr(.le7Br\Y6,~g~s'ݬ,cJjvT3Or,nt8à# ɝ)l8Vj.⩰Vk8vVSVWjJPO(\N˓ W\#O%\UkI{"PB $̯1\9 j'd•G p̈́+WjZ+Kz6r̕ǻKoN[I) wB&k~nt3^79w$}Be>}+U>x~ ?\s`igYg>2Ti~HN݁O}\cMU ?_U2tθ #~rKÆ9[k3ZHnOp=}U_h|:oԍZ ~Ǐoȅ5}Ӆ?On/dC1 Bw{s}Y^V{[xy]T/fՁs@7)EKy猷HNy, [nuEtZЁw9JWN}IQT O(aS'Ѱ(ũd4ZǞT+2W(c4 Wl0H:pUt*Zk~Zh8>v#Ռͤ%~`XϖM\?կ1a.~`t:apJw5o7/h.vgv'=O__]sV̻I ތ&K(\|y(#Bo$E2lR'ϣ3S{.XXcߜs<^t4~p}Y}?8wo~_~qOdXKZđ0CyaukxDxJBlCs2KT@6UAIRJ^#(Ay$,j)GfOnBk*jH91SqVB9*Ĥ%M$/Kkbuť,/)Bxr^zԗUylWw/mАSYjU$W1.>Jq2sqȜד`i}uuo3φ)gOs'q?`գ>>\?_`{?BpϞv۩.q{_J4zpT:+4,4R h4 ׷^r.SD/B|P(w(@@Ѿh 7 V% !Çhl-DŒ9gh=A)ڥV6>jsŘPk] e2z%oe ]TkV<ͽ0 –U+݄=i5d'qp '(/})?GkmW`E ],Y B0g7)gS1QQS#J!@S VZTh E+@@*Pr-b0w5v g4E@HR!8 l*"%g%I_w;`3s65 Ɯ>k|}%p6>q,Dִ8~=kw+Zz49c3;VCt׫oy5y⦋kCˋ7_9;]gyp3wmWe3Z⻽Gq8{5T ל7~馔̏vXxw^niJ٣!gnCn~4yW_pml~AnUZ8a1mR ;Ґ"IgGJҷ챝$[>I{O`b)H6f2A$KVp.4"?b a$\D I袤&yxЃGg uiʜ (qhtO]~zs?]~{=XM5WU }iO>n=L`kÀWWQS'TRQJ5ԩj!(ZP RQֻbg\P1\}c#u0 6׍$;̝n\y趉uş]лYW@Og| FcTwsAHdzjHd \U{~Go߮x,?<}nT6"@i^8cG+eR)T5nY,_x-BLɼX;.>uE#o%%e{,ࡐ٪ћe<2jSG|d{xp䞏GRuTt҉\& ]%hkL4dS.I ?}^(g.MHJ"B 
F`xUPg3.-@ZU鼗gxZX9:}G<,V\j3OiԢRWDh_%d$ْ}Cɵdכ\K]=qo?2{<-уd.OBIdB]sO&ٻ6r$Wm/E|,s FdcI<+Z%%Yl9 udOUO]͛錌ś4Mwm5KHrX,z&0ipsÍ74!CWC4͹o)v ņnN.]kHYnucüFhc5h&tuhᣩ71صv~⌖tJ do'K}2_W~j/|=;>xwF}9|n{E$j㟯gm37Qd{nDo7}1',XVxt] '*^k߳rH^6-ttH|~\S_od~BbFtZOd'Oʿ?_}xf`B!fO/O>et~z;t}Rv0oM.. G{ZnL,9#U7'q9m53!ZB]mkN vר"p[!}m}Bd6HOuOuYlN5O8+6Z"PbF1 w0o N2U!J޻Yg,/4vECqtrJ )1i/9).e6 !=6qL;$Yg?#OsnC(+3C S'M@t@ *%H,E`^ ,4cPUiv}D]1RM&6F"RtycD^kE}dhw%2t C_&<T_+*Fg%t(+TV!Lh0DȇLэyGG7bnE& : 2MП]҈A@?9+xTT/HΑn,@:B}q|6$OiHgW#=Ґj!`.h| e Ků;6]o{>tI6 hqK:b#Q?m/\p{J#g8cFR{B<3Cw`&J)k)A{˪8 HSX+`y"#O3D™'xeXm8br2:7G:`B  .a%qc IKJ1V|DdДz\(/)g1pD#yrY0OKPd"$+ N?v B]= -!1 1P=(ә>b>EMv.[*rT8}QWE;L`2<eF,Do8]yݐ4Motm4 _h;PA_kFo#lh:ըJJ` ]5(@b8~J/\%sV+8(2Bbh2*Gt(Vry}g t gKGtj g.u?ۭ9>ͮôE)B.:z!tp0tRޮt8JY$ -+ $'/k4goɪs|G/ "# n8t6&G8㩜<g/˝ dC`RfG(H ()reF sE`.HpjUvpD6jrdQh񧞊-OMy+2;ɧq nJmWnxnL3P0,D!M> )н;L2%] K͌I ਔv+qhi-[vc\]ry)䯈,:2\e^ }NGO&:o^!4ێNFinU?jԠ9iOkԂ r8W6Y(o2OٗT,=zWa]O`HO#߾TïtIQS*kF$te,ZIk{R`.0pi0ܦ_mv=hR@k<|'3 dcjޜ,&}HW?7O>{(=J>хŝ{RIK9\u1e zY7K׫Wޅ<݊lqHixpFPޕnvq-ܨ>i3ꏋ*GnIdy:V ]NN3hw2/i|GM_QZ* -9y9lRr)i2mJ>b&setVymЏ}gqxËdfn5)hNÊ$rDm'L峱HrגEy V$Ќp2}n&i皔"ېTFI+^~ɉCOzcT*zc!]EUyG(6]56hkɞij&V~t 0ɏ!ɢ$vsWmiB{N_^#C8\ /"Ǡ$<(|GDYMVU-Fc޳ATLO Pѥ93)j'HնՆF"de a5mᭃ8Cg]}vp{f?q|:-6S\(2DZFf%:YB9K¸L[Ÿ(34EBb6.9h&f69̂b8s.jW}l`1[! 9`CVQrFi &f t>Bg  r /:dĂI`SF4NȩNƹp֩_Dx,|kee(8XM/QȒrARya@%~ٖZDN^a'U9a N $Y.r'xBIs#K}V#xpvH8K=i)1uV}la]$b'DDxG;D6)hօ֒+&\Rp+xx,t=3&6_`}[ӛ,]kWq %fyDCTBdTt_zֶytb2nf q>Ƚu 6 ImWJ-!.@EÎ.^eGThXPC@NiYYo .Y-M3'Ka 1@qS3,9\&j%Z8ɡL#9Bnsx]h.&_p,%h~{yunv;E"Iu -΅ظ{4Z̢Cǘ"1s)RK>dDB$Jyn qpe\,WwdKpm.dUplA`N##ґRjTv ^=Lfګկo6rO10|!BYN>{ x9W#Y54YP5ΐ[EkuVC~ݷGW_П ^>sA}.#9}Nx}15Bf f4͇E@71:I$}=ˏG`fg(+~AOM.u笱ɑt։UPKɲ11 jdಉbE;1 `^0d $@,Q&ɜT2.(c نWdR~2GL$t2~gUu49-pJAwlnd]m.$zOQ r[D*(V:ʡr$O&0:gr5;5=,j}(J$%BȮXT1CĪ\r*ٌS¨ <0o_rzWirMlhvMh*iApL̃ P[U}˚`&?@E\x'8J%aJBB)+阦?{WF _dq}7~ł/ELjby$;3[c;nI) 4zY/^x2H6ɁRacf6{. TMQi *%wl1 wڻUYl'BLI-ZW8]St `uJF~_x[;&?  
?a4;Y'Zo4qQ<$\O xXtzKffzGH׳~'-{z59 =- u+{i}N6v^ i//=6{xv!~a;[uƹϷyCm1\,.+p_|:YFgaަB\hߞg٧Vԫ#l{§RnہjN?:ҝ ΔW;E`:1s1bftrF9}_ t 2*U,؜Jɐb5`B%R1¨6z6I(>" 6mp^;˔(QZqhe%[')m%ňg7ZzgY$4GXh(Ï6P![{>NN !)kG.̹^,2Fg¶}Ѽ;'d5}VQkC62IyYu1fHk 2ix4TFKYbxyI)Ք9 ri PeF JCSlPm tnnf,a`$2\=—D,P+!(btA`KʃetlL:"DYlPHNfKS"բY`HVZBBKIFc JckϽj]Sk+dmË-A ^x;m:+ΧΘW HP:;aHhrZW)"wµڸm{n3'2.Ϸ%d#YH"4sqqbG/ÎZ룱{WϿ䈊!/+P]*tV '&Oh"G/KQa*$ \EW RH'BuJaBH1&.k E 7-%{8K6ԍ)'P u6I(a޲p&U$G1nR&ʻfoĠYMo4 Dm|!56p{V#vo^ȾQNzy45b9Ʋmեn2ۡ%y~o: /=I($h2*&PNΫzb|`W:8@HIĤYfϐuDEg/5{t)NM3o5IP K~q@zw,aX'BT(koeW")f|d%/;:ttt ]Y]&BbMSɏ!W4(K}2L,"&9\!&KXST & XYKKI# D#JO#Hs 1z @RwE2D뢲wWVSRҼ?>Rz|dC$cI(/Zdke*+5<{6$ q m5D柍?e *DYIF#I8݂i)-jл|_}KfдqB)bJ~)yN^I`;3=)=p>S))%YǥI)YcUQ+ lҠddيM@PuvmjjQo&4.UV~A526gf\a1F3 ,͞ o- YZ(Me'ܫ6eZ;>>].ׯ0jB[ΐ>hk U3/f2fb+)bkx:FUMm] ;+l E'$*"=dn&nĎ| j7jjBnuCT5< "|ɢv3iEIm_u!E4Bff(1d*R,G(bf9Eb\877>gǂǡ#qBm3^ fYPK fz0;0ē?cVgu!6jBSD(c(-؜d,Yi”ؒVtp,qupkq񧎌MA1u6%i0ℋf|11 뿊A3s >[[3AzFP@kzcX rQ%wU_#w1 WZQN{8Mޏ/h1~*5L尦rXS9Tk*51#0-gNVؙ)UEw*XZ)aU)a}}X\Hx9A}p=7:f}Dɾfl:^yC2#/e ~ ~(㊆:(յh6RZyP\[WE*Иn%LeTުj8{-bH/5|M0ܝ2ijqq-ׯݿܻ7.\[9O+yCx "EɩyCuD›do[KSkԼ1ɛ0km-sJdu`*Ivk&n ~g=^eկﺲΓm]1mF Te9x17%o첁)bn%C J=o|v$ΎtO8ϗT=Is) 3&(f@*tjB Bi< / %XHtI3)&9-N՘l\)eذ/9xr˘w}~\2Ovl~^-\[lf]b@"ۭfrwZH۽1ztz?}D=u~ی@h4vEoǎ!ǞTȪy=&;ǫJДnK┻ˋ3d20K?,&뻺Ev޹rGM7`kMh"S2n к#=mg&識\M\6,|򠍍~`7ϝw,*D 8tKc=r\ ٸh:W}d)E58+eZiپIN&J8I s\in*WFI ĿweLVZjA2zF*^H$1Ƅdg!<,3ށWg`*ݒoƒ8&6gh GU'TP_N!/BA}ŷYNIŒDž.ͻٙ\vl DΰM"ɚfG8r!$/+5%;H`$dm0arykcڄ},1Dz0I GTTg.)]3qvS( ҵ _[os͸lR<N煨[Q޾@f3m3H]/fb]Q:"w VN: DCv`;,ph#ˎ$g9~Òd噞%3ub]ߢ (6obϗ MW>m#!ҍQ7WY"]sݶ9壝_$?uKYnfwW.s4opnf[-_\#|ʝC۹oĶ7+7yy@'_u٤/n\:LpmC#~;9ʫСb[/ӊW[J jvp}řNt卐Fg%4+(Oa/Hwe8bOb(K}#3>9+ ߏ,LΖ{ #kI˙GluU#e/\F yH\EE3=^]HAvJ0LZFz5q>5`ӳ[Ϻz~8<7MMROSkTw?M]'0 ܂bIIfvCA6`9YJ!B1&IS=B{w"#=njs~<ϫn; F!sìۭn/gd@D1*a?;ujp[ wHFhc3t89[E\k@0fA.6/i8%GT0 4{0l3үv?gW~j/|9=n!F/9d6]Fi݉-w''Ig]u>fUYDL8:o1FpV-ǧj1ГgNTꢓ]O+l>uRN4p1GʃDZ4}9!ʃ#~|vqF꺧QM:묱 ^:~|O?_᧣|ą=zwGVL1D]"hh(c֣GNW:Koߪ ~QtKEBH 3 ?J INw6 ~9 ⼉LŀH ߅Ю滘5f;ϵ,bCF*X2oa^1s#lN598+mqAF1?:&CğWdL,ETbJvn֏󋿒6tD2@Js5z:H.bFcfPAGo= 
vާE=df1T樵zia=fL 0I M1C*P{UɡNXA꺕m1GQ{j5?tju^ XI۰FNT#$&H)R*ӧbR-_=`@^ ƮR|{| <мOZ0*03ûϧ[&G8j8}G8PR_{~[-gz3B=H\-äe`aRZ_0mӶ]_=w)i^ \JW$pU\aW V*:ZJ8URBX_qMOZV7%r&Uv#`f+ЋeUҿNg=#:3'3U(7|&vEwvG}MI]?&Yyt?t%hAi2%'N <ehc@dScdQ1yC~hήKXOK?,{ߵA=LN!~ kk v|^)֣lTЀѧFQlH/Vw |tnҭJ\pהQ)̓gS9A fuRM+VpYS"6iYvgj'MK4.1qLH@$[ ^Ӡ8= >=xt+/)@O`a#g4>۔2{Q)fM+k{z)H?SgлBI&Ss&Ig-Ahx{h;Myelf#d LYaM>;g7Sʝվ~s2Љ Y O<ѱHt΁J&Db#>hSpQrgۣ96YT@ԇlZ>=~#9s+^@޹H?~ @0b Y0Û4ʆ6Fipx!WmjZ B#q}S5{ "-=q)WtԒ4N Rten]}s/cANrOƎ^᝟6?*# $kciYilќ C?S}09XNVpÉXxH6lrҞ3QV;Kɿ,uL fxFeʀB+/|fDBWDQT&GL{gR4菦qLP%XW.Fzn>{nN3?Zf41E-+%LiҥfFBHq'o;go_A3Mk4ޭl8 5fo{ŔOfЌqaٴfGXrQ(,-= r D暠kP"D :?;.f\SvT^GԗڥǕl j5,F"Fe:G\4s&NeJ:g̩=4)ZZ6.]ٖa7NdJ?]S  t_Y2?њv0>MOW?{7>?:{'=,c|Dخn ;wS׽ܽ&}iv ѳP1fzJ.OK^qp7Un SzڸY-TrHg5>ͨ6qt:WI3 Y=lTo@o@j 5svOu$ޯ65ޞ_(K}myʨ!j`DQh\cayu*DU`CL )@.2KəIsNz6A; ^ޛމATsb -UƁE)aɄ%y VVd#0TI Xr:pM3 *[I?+^~JE!$1B*i-pT 0k;)gDqBM PڛdǜyjD)t|Ֆ)x!h2I'pXo69blOq>q*o>Dis\QnGcHQoUUdrxk[ rAЃFL.1ٜLk3Tmd&qjXXM3vB"J&.- M/ 38]l7GdN&q@{nhpxix:~HU˖9ɤFgJD3g/cc:!!YYU[Z{> &CRؔD AC2%ɶce1hleĮ&f磴bb jWӎ]Q[TFmѣv`uӳd"=G0UGrAK%1E>YicY 9%š,jFMKQe3,QxXMx cWDQ#}/wI08qQRa@!)ATFˌdtX10M"ר@r2 sp&PF,iFzCʈXM-O^u \4ɵO{մdW\Tqz\q>1";*:6/Qd8.i]:}ń %;Sa5<1J l&W)kWq ٔ?=̍u ~|!G+ON4Nadž͟ɉd`\f24nO)ф>(LE+[>¥b^C˓eZ\ҡv_@K]{sЮ^l{BLif33Oh̃J4B1}(j8 y|k?BcXPr#PltD<ZFV*P%beo>"!yk5T*F1:᭶&|s-_mKR^r*yvJk_Y_:FTehW]Yema7g< D yDUaMh*dv?*[^Y%s)LWY+"W{24,^g,Yn).^3t ZV[Hg412gk { {#禈~ n;1h?̧6hf9*4\~(^EҎjƤ #WnJ#cwѝ(ټc K4#lz,S 4†਷(˂h.ĒdDP%<9Iih,˵wS ϠxHxwp'VEcwރP6Ch 5dWb"C!is@Ӏ )4EK__y&׀-uXMu>-* c 88jA0t[a[=rc@R@O&9)vh2iT=4 HI*cY5Gb(3zF+I:iGdsA;sOq|fq^DMp%H$(62N#Qg8Z&2P&1ʊnշoݢoXM5цᐍuL=(j۳ b<`%{ %p> |PLGc}jY*qp i\5pI0 C6[(*plrN$ zvU2 #Cxk<= tl}s궽VKk$'ѥ|轏23nde$wnMAoK^mB^nFM ;]M/mI3lБ"tSF6Gu u`[RPԓLFq[&NՏqrz& DRoWIDmc@Qp؈D̶IG[ꎦ79q{lNJ;tU}!Ж&Fw[4dWa@j؟M~L|3 K[j:6Nмfo: 3Cu=T#:n^נ-a)kZ D:+:u%GN%:3FĠlGn~SۍcR2zg _E2g^WNq[Ԋz50 $w tvCYT12qZ:jH+P<]Xg LI\|5.\;5OAI{`ޖJ/KO4ήƇܚll[LS\LSoҢ]x!8J90qBTPࡘMƃdrLW$Y(ך|5UJw3+YP/q /!wkYi|tcA7lIL&$Lq<6-%N  E OӚecI80Doh;Yx ezCD21kQ\kB(Ր7[[8scc IB^ xZĜS?SRڎ(]-}6 >K}iJ5i1t 
M3}єKҺ3,hPw1tP*R 3{KѳO7\.vܕ hǣN)S|#1(M?QDXNa]^$\MzGE%.o%#(x4C'7MBҹъR+"Z\QLJ+@\Js]@aЛ/M_$/ \`Ҵ9lЯmrCWÒP6v@KH#|Pݵٲ9URQBI4+}c~t0.8V<hͅW1X?ӋF?IvX!xK1׉`t2(/vNݓGG暰(_!o:;' QD$T^kDт\'hz*1I!T+Puװ5!5meQޫOtιz'(|0,'$[f<)h!J #YR!fXV蠹6TPiZG&!W )}xi?z=ϵ!"?QjyhGk ƒ]}+Y[ H}v|VM\/&} !W(Ggb>~Wlabgy0Ϫ}(ۙڙD?*n-:EU~nE}kE?1^&;1 RF?z&q7v <ĥK7p E>AJf=*r{-vv--]ʼu o#>mC+nzkn[Es^Am\׋~꩒}<˫>~YS>-x,Xk$, XO)Rb*y{{wKްHP(ɤ@XQ½ Kvr t"Qch5nP(Ebh18o3HF%S69k5Iln O+%h8Ϯ0 Xt>k зo@p B{:_B=/iffאwIƳfw86VC'SAe6TLmύ^}{U)$?odX-74`oE!ЏKJiw}kJHUH2̪G]\]!@ء̥2 ՕaG؛{/>JG7,Az6lMo?o \q-_h.Rw6* %9wXЬ6$v MY"hai)<1Π <0iqYޯ^"3˝e)8kdalH#<P+#;V=̰G @ɱx4[ݣ\<Уa@#RWȰ8uXUVCWWK`FuŹf꘬+dٻ6#2~TwU/!"%,MjEَ7~|Ȥ!GdS1 ?fF=5=]UM=WWU\^ \UҌꛄ+Mgd(yHf^%\f]0lx?8H/;w[ z2`6~LGb}iV~Ƿ;bquqaő$n媥+yywM̺6!QCFok\\锜.6 3ZDr 7(1h"0­& [۲ŏXMy2ۇz|Ûo̻̟?]͟G俎?g7S烋cZk+\T}e!d QM^ᘅ5>vh>v*7O-z_ozy<v>bBn7ڵ:H%^z {vxFQڒ ŠDH9d1{T\(ǘBemf%eXuQ>T#ZMI=s@L4.$^BeKpD\ ޑ r,QTQ36kXiYo#R*ʘ!F((6VfWfQhWY,Z'!ڬѢ]>lW%OX`Qy,%4.*%k1UR^H%P`EoxbiTB5Rfȱ T\PDǜ/ R8WiVAX]x,<ˈp^ʌ>.x;OO&O/_̈-T0U+$ք6Y;u$ }aBTBؤ3BjtjAiUaS[ KXضsu_VG0 \P8UcV#j v׊'bkM5eD>@1IK,IPjtYQL-*_L,mmb,H` XVeE2qd>$SD63zFcŦW炈ǡ#qD]+^ fIȃMLA!\j͠CqVd6-ضWŢ,[% hE3 6[:SlI+@%Uːg?"~|ݛ|Vɡh"8G ̈P_aQ9"k h-5JG\<. 
M8:?sˋ????y.ܛhy01 AX ¯w=ok3Q(nU)_,><GMӳ1[rF LRַrMBR\AjhuH} n\Y-:1xZ qCF)tgb ź1cNqq$S)D-dR`r0^*eSI.b.12(L &=Ң[ վ鴙?}{1paY^o/ܳ*=uc2]liU1t2I%]rRLB7zas˾N;oqְ7n o&/vW 1Hf)'+gB6E"x9xJ'A:pnW҇XuVQoyŽH* BbŞĽ ]^a yY1ƷJz.>pUa8sqۃMSf|\4imt!a;|n=kr1TLwU's?KB00 ȱC.}䪌u+ %K{p; {k8wژFi9R)-Tvr5?\nlk&C:f˛a9T^>o˞|`l xnz OyjkڋqhW%5 G]!0ZR1HAZ￵7w/sr6[29PZrX֗w>Ηj/^a6nouu=dO'ewnxYqWڇ<7Nn[a3?_ZF37J#ҢkmF ݱhQhMPE%VbԢz-1|\r%,3 _/?yStU:;@09?8[G(Q%})JO1NOlO*;OQ+"c+1\N@C.WYT2z>}%+b%Y#ftIX}_oSL SV GgTaokgbl).e~mܻGq.o\[i_ShDY5#|U`q3~`W fD{RϝWF<: ӷy놢@W?桂V5ZGغتZ3'[7p=VwW'ίNLa84_,QsdU+> dYl &P՘P#DoL!EPDUŵPG3.[5=]*%V-w\Υ5g_ˉj?^O)}x69bJ2jR(rSX)7Ѣd*)a J$d0U:BKo9ʝաj=k,$Dl<;PKЬ1稲CʂF(yhBr"k]x:3hV&yg|'w '^t~x0u c;¤\C$@!!yps={~z;|Zr! #P #g]HJ0]<9,7+'S(:YDU,J6uN1WBd!W2prJ:{ +tHY|ڐJ|6G &)SvP=vIn#:z't)ۀ&Rk>t6Z=ØN SEqodOS.W+pp@AC Sṉ;s9m@UvlFK `ʞҥJ}o7Pgf*lTF5.;2TpplM%Д{kvv.'6q6=0]9AסLL drn#顇dOzk ve>~w}7V<.)NLtV-R,SXu*Tq54:e.OYiɖê?}yߎwrǠ7~#+jzQVf\|4,X. .џPhdRq&e0)k+v˧k~| ^P疎!"l]G2E]l \dM*ZFJ{gc`ޭ"ji;l<\L.'Zڵ- \~?9ymys'}/O>N&k 3r!?['=d'Y{kٮwy ySe [>;Y-Z\z1.Η R-UkK'PrN`8 (mQ*a6hõ-})U`.xڗˈ}K G֛C|EI4rOGzU}I.KL,P)ـbu)w2{Ȟ28S oFPc.n?q!u.dcE;p/r: ,-t~yyc=H{~~QIJu9ֲƵRMIi0 ʾΉJ;T2z #rڈ) 1X+P(쵷$wTH݆nX-63v_o wnm mჲt_2^s½JZW7Wy_4beb+CUT*J_bc$*k Tx`lRtJ'.²SG[AC#{Fٔ5lZ䁃p;Tk8-vN<mvjMgmF="؍az(ld |H%nʼ5mEg)_MT]}}u>PN4CdAXt%ᢲ7`FXƩ:nn~s XncWh;[D;Z"nFVp*)"AE0p<ɦuZ5Dc[lX}Ж;d-0iRz gE^ux*|HndW:Eh71 "M?*V$2PX07(s)C6hGx(8{cLmw7W| V& T8VнVA9(F2w}5uyGr I6dTb5تd"BƓd RbaP'I)]rHѢP6j]**E.NOztΏ.CKKT'](tW򱒕DJMTՂ*vTvŲ2*﯉_TI&b"t)##ٚ8: 68'LD uU`-]x >̵v/s15]~ w6gF;9 H 8pcnpm> u7,>ӉVXvqQM^fU6 @B>T r6aTRDNoRV:~H.kʹ 4FӮ>&}َ!*B*gJbw}$4q2\~Vb:ET8!vJUЪpk?);6XlʎOlOQ ;pUJaTjy*9XeM%rt  7؀IX*j5&.T-hMB[4-:O GwDyRӹWrvu9=5VNêr}hsO-È"-22EДK01AUh [Rё Z~vU,41X˄kj xM"!㲷$f;pgqOgu]w~{h/]}k bW}%h,_'ý#Uŗ;J|y1#U6t()QWxp<2»Z?wQy3ϛ}Xo hȧH CTȐC X&ț!n ty\vJf}|e'a} ϷgUv<Pj3Q IdYLt T5y.h[P lFra4 `*y`lQq9ee1sХ41Jv8eL(aYe걓,@< Y#+WgJ|>۾T/5EY]3":,=Zãm4Ƥɧm1]lu4ђ/Uea251<g-gyu^܄gCmk C9RAi *JXq<},O/ڼ?}=:';[_9ŢW2u%*  H @n\VA̢/ևb͚})6Ey>@i؞ ^a^6zڅ\c͸ϡըO,M5:9BrB$Vɸ ; 
bfmh܌==LxzLO;~Go-RC'I؟ﶮ/<>:K/v.$/3Nd-g8i'{}!SҵH wܮڝi+wv8#vcu?>oywO~Ң!=ˤyT{X&]:$[fUER#r._Άѯz3S5gW;h&ܵwWR&"dW Bu~qv{K /ׇ鼭BxW\0)ܸ͐[cG6:t9~4 F%7zKN׳/ǯZ[w_n+dznyۿ]g}Oay"~:N6ٕrjߛi ǝk*ͷl h<@>a73_bIZwW=OWshqg?t]/k!n~)tCէG<_n߭~hz/7 d|,%{.T]έw9b{~|<|H>3:;NǛ#i6o&t]wayLm!B׿>Aw]bt9pV^!1QFhY;DN_=|g]yL7mQך omPd&<6GP+M0?΀2AtjfБ ērqjVӮrg}K|@M\h6LmO(=mZ7RvhmmcCx9.LUڎxRY=ъhVD<ù?n[Bc}o@CS6=+m=*p 6UgRw{LppX";8.(XuGd60PaiB{,k_n[Nq2oVYki:`Qr;ā}bԀA }b̏:4ۨR"jսyFGtg&;p]'Rt%R$\K!Řz!"S&2&H I(7-l:0&AiBg*zª RUkp[nR]]Io#d *قI;+>}Jj癥q+-=z#g 6)ήn-\K]=B%Smʆ*MS5uc+3ȟ XP[Uzs\?vm m4{?hB6=sMCŮ9x#խuߙp/SkM B;ئ=4?+ȝH|=?7IǏۿbGY|&XBiBb_4S%d6 ~ .p6Bȧ>~3 #e+ s"V'⥢K)#] p7\t%ѹulf#ƜfB 0|:C.ZҕPsU Vbt%򙸩+ Ƥ+Xtjt{V=1|`{#auSVOz.ޜ.zzp[³сŅSr-ȯ_͗NT5T -;upj3>ܬ 6~sv*Y5Ų_6BV /i ۛR1ޟKX[fΖ޼xK0S>G ΀ebE^c\,:3<x!럾zPW z/oAl+M3(`Dף+gc1駉橧 j -8JXtG]\PKUOD;(.Sg^^o$Eێ~W49`Ⱥ,:q sMu*PifzZxJ3[L@o`F1N= am r{4ӓŌt%KWxRt])VN¢J9; lt.+e2]QW/`u]E]MPtztpb'W魠 ΝR< =H8vQnFpwO>f77N<=v*8JLl 0Bӡh'1|RaE'Բ$|R7H[q ?)p.O?5]firJltlSוR[t5C]9Onjt%;+ťltϽu%ք }J)lt% &]) J)]L(@H8`OJqN.<6uf+ȘPg] TB/z],4Ԯ1+};Oҕ:J]WJYfRWttD@ 7+%9*F$iJٺltHJiO]WBMztxbUbK^@8%{|9F)Mbє,YUKlcQLb(` S#] p כ\tS%2Ef+g0FHWΐAFW.])m4J(Ģ MpSgPq-pEWJKJ)j"9ue+ŵ6])K3Xt5G]5.+LJp\t+%R u孃ߍJqm6CJ0u])ej^DW2HWۏ`lt%ɏ] ue"u`NAϓAŝ:(Z٢ע+AUhNC6/ j7DW#)! 
uu{laU$)DomD޵oL!-{%W4I0Aˈ .0Ai'J;2C%na`Ez1'] p ts7r.h sR]{HW1d+eEWJmPt5C]!1#] p0DWkBRP+".ΠGFW;|푴6J)]t[FXm|,])n&Zɏ])VԢ3`cRʭ \棫P[gѕ+M"P2ar .#] 0d4ApqJi+$_tjt{V= $kkX{JtIwQn͞$hYMOPW0[痠e\&e\1A˸ZTj9-cd!䣀z8\N|$%%6FzdҕJp\tmr9j^z>#] p4b.R+,\!t%m6R\\tSוRFWt5C]a`FFR`OgPqC6cWB mr9]QW"cNAO DWJ뒏 RWm49J9d+ .]qt$])-cWsԕjW`'R\6$dP(ј*f+HW6u]]=ޞc]US?fw#iaq6\hG]\.jdb"`ĝFi;|2Y MV%`eIYo`Q /˱ŝzq!A(mj2%lx,}] 0Yׅ\tSוR˙&#] 0:FWDWJ˔҇ CYGFWM6ѕZL]WJZƒA03ҕ ҕZ\tRוR-؂!)d+ So506rcWle}i] ,9\tؕRRjŐPT\P+R]=>F$~|mfӽ]Z=Ňs-vbPKV_߯t?H'k)Wu{VGiZ}zN(߿Զtv#Wc*zԿ˶BW7/ƼJjysy-esgwmq >۳_w|㓃хn>U̽KzX7wu ns.ʕ܈{q[E4ŻZj޼|-U7jJe,:GN a GpXpn>OS>RwTwwRmG^HZBP/경~S ~_ok OҿB[;g) =pp]M4=4e8-G!x[Y#Y:in}^oΖrs?kW%Ő.;\5D؇q@oXri!t큢5@l]#mS7w9PΙ3%OMٶsvpr`8xqRwvlЄrG}XA(un#{i,|3նL¾k5 ;i106 }Bm0 ZrMJeD@J1pCՑ坴rj؆^{BB]miڮ@x@ F")=Fޘ!F&i9iuіCbjtؤ5K9PQ΍kZ; ]#ؚ>ġmڰ rJg'Dgb |ZC:mVH'qN 6@dr*j]v_@2&Zn kBhclu+yX QNдYe}]Q=s%xYPUS`HJƬC6,X9  "ٻVLtzRcSh'[ZpUTiXYi)@fT `6+(\8%X( ꛐIv4U2T j,K8JH&{J V+dWVb@,AnTzCZq2Fͷ6Q)j zN ED&TDE"3|5gAx{b̂Gܤ ̈́#Bl coSL3H ̚` UR r ֑tA@GJ LEsf+%RI9n/XT5o&mY#<;"JPAw/uU qj2 R+a/uYոIUDI)e+%:gd0[Qзz $$dA*zjJ YV+"1۽@VQ=r+ZQCk>84iA#aPhrDC{يq)4cMDr|T1&PT1yQHa8IB1M`!vn1PNכv6д^-"T]Z01սu mz`-$t>%@uP< *}t4+&{H\!iZTUFB1L: !'`Ge&:#.3(Z R|$ LjyU(C Y0~ƣi(^Bdg{{. O&!z*4}4 ]{ #BK|u)}=ŤuԆHT—H[QK|̡LGuu $$LEQYf}Jh v% Aڱ", "+P(v5$BjF,z,Z{65^ PФ td!.h]`mBgRHQMfWbd@BUq@ơ"2ΪU%? ʰ"  De#@ A16%VmZB dtXI3,I£j4ZIP*HxnQڀJMKKKoU4^"{BZF7i QH6 W|ZR0- ڪ6t*%ns^ X4 2z:m2I4 n=CnNN44z63 >4{K(v%[Y:64k*ZSRP'ՓFCkׄ bL $foʌؓUAAE!9)lz@'#U8(tP"˕,.TP=`ePˌԠ*1#K[MJ;HUߴYaKl+TOۉ+ IISL1. 
[`' += bp0) RT XQTbwv\:%XW (]IڈJ5(Z54)hc#7 _ E5iփ*M3| R5Z3i^& &ci@Zv!; tsr:-秝 v)D oZ 05^%pgV kJV@P8U8P(-l5EO(IѺO34%0U2'IkOV(T %׆&I@ b~q?XoCnҘM5Uq4DX%d,N f6N J4;,*ENJ|-v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b';~N osrљ@kl@@@';(J!"; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@ zTZՏgՔz^߮omZ]o/tsHDs2.pFиh1;q1A.[3"n6>KBC+B  ^ /iDaqqJ](&=^|uMw{BcxW_.V[QXVZ(>f!bէ?Jo'㍊1tZ8^9_$t/)((TI;xil6{~p7LOQO-f_ 1*uO9VrBJv(xҷ4WrJh4)])Ȫj.tEh?P^}`zLW27zs(Wz<-uY: P2 oߴrYYkur]@%:fs,|9_<;]y x='ʒn-vD'֡I)#uFd aOp#5O/&_A}oч==E1nhV'7m=`ھxfލO=Wz[rj;m^"pyF_/^8ePw6׃YX6\\n(JN=alx,1D{eZ|GhC/uGp?h-w\iߵͳWt6h{gM5vC#bhႿe~*Z}Sa|o׉xbvhk*v(}XbBk:k{Bcz>4 >-nYg~^m wwqݤ.u7A^K%+C'yzקO['j9"\? ްmd-7T葥olmɸ%'g$wp ;!&6okm#G_Ewmego &X AG1"K=qݯzjI)[8v7EbW*lжӤء0y[a|8 sj}\po$2rdƳe2_+iot=ݝe3.y VmE-j ~5)s' )ɜ5ZDp"QPg&(Gfo ͢KW$fLN_q}AUet4Ѓh63yOuX_R *@  58juGqTQ:Ԛ EY(D5u18p{cΠvʹI{$hIbVץ `) &lcYb[iDjwsiYpYTt!JG^%*׊"XC IdΘE/c4Q$>D:N|QXsc+0;"(fi V-JІit҆ 'BԆ(ո"Be^Z(sq]5Ce<(/MǣX>obGcS0 O,UZᴆ(T\8Dd3? 3?ִ>' 73V8m1;H[/ q:H 7.B*j.9CG숵kgNQĔ$x/?%/R:c30נVs&epm-5{, 7 ?0(UY}o6̦l w댾6H0$Oڻ)ʋarUgoM 7Gyzx7jN&ȱRڜtT'T6{s7o7O~xo͇߾]8S1\tȣIQnjMoӕmu}\zV9~aKFfa>\ V=78Rw%Ut HĀ({.%e=ղd;vXapCVh3M06]:Xs ٪#I-$兎,1EQ8eI'"() `.a1ʀL`쳞Dp]LOʄ:'bD& J`Z@ЄdE n(i L"Ns Lg7d]a>P'l doKW69ٜ=t CN';N`y< 6+\wp҄dd6s$S%v*TFDV 0B* 7J =%IKTt ZŸ}1ʨwq!/Q4~0ʁ2&F9-(Qu a :$IQٰ6 HJHִIJ{PEф'V'#g7ND ]IRy#Y3eT[#Z58GhQ&2e|!3AL#)17?^GyX`IKz՛ 덽] z?\%J2d5u`ΓؿΗЏF(@ 1P ༷T ,NH]rgW3U@#g6@ )XcJn_H00nnjȜP" sj,'ul2|n+KekL3DZ9^QT.+ჭ %+IR5B7t>;я&KHd̲ yj2A3>39>dTCq{D!63r&0BrHVo%IL3Nv+gTџSQ_pd}FJa 寯!8MdgӵfW >-{*;o#aBp*v&s){T(n'St| oz)ze-uino"6k'9Ս˪s rﰏBZeKWMp';*]GKVXjuj zwWQJ07%e}x~.K)|tGm&j^oYzlTe"8e|l,a-tFU[<@MѦ5p>)kwTڑ̆JʏTXȵ D3*F攝d;&֣{V^5Q!JF@41)9:=&jˈ2Lhex.tQ{P7@ :-QJ!x#0/l/*FHt*KkoAosudLyƏu6(nOT vŸ{lWx'r/8$GZ-x9si <^}G ($EЇèz&ST 4%gFYrDu TۨАHkQXj֜I-$gJ:AD08KNPJ{#g78B8ڡh4\>Q6F5lX;uOi&WY qOAEoC}t:;G$jj\-=(&SRoۢ߈BV.7j0x*g Ůi9YyWopELFk6<+#)iX"MOap멀T@l奷Ub `Ίcǻ}Fnua9}1Etl@́ZZˀQ̎*(QD5bģP%ySjmɎMnv ػS^D RCo0ʻ@}a۔/'D#/zO 
ײ?{hk꛹peinK}{%ߏuZz/%65wIk9^uWkx',},а*c?Cgz/(M&ؐ&kKaRzI3+<{aac0GRta7WT/^9rJ(pJV&Q)Kc%%KO(ai܍ IjD$fm9d5k,jTAVJAc9th_>\L䧏/ǧQ*mV银€܀9Jѐe2h%{ A e,S)Hv޴S+ 2?KXEAkML(SY >kW:Ԣa|7$.@,'^&X.ၑOzbRyO 60F,ŠA:fP fpQ:#X޼uE%2:Ř'N2=ry,:&%4DEw Ғ9%c9RL㌃d!%ZR.4,<,\222pi^;7*7 o\bIDA$%C,'<_"[xRLKkF[IZV  O!+{.2½geU_镈٨vĄaU HЦ􅌥-hWTSŸP KmvgիؠB 5(q^$9%H*W\[ Z2qD%K2}J5dDT8h5HP #dHgD:I@sa1rvZfx**ya,q<#F8HD',"ϴ>ٜYT#Teo"@m1^'8IiʣNH@Qr S͝F?naX=,ڣ\oT:%[g1.9T.rQY.n1GDxGK 3X`ICD ČJEޗ$r^M-?cSŸTAuGa|࿩׶9Y[X)ٱs3mg_ H FPV"V@)M2T* j$ФYJg=RvؼL~wg)k{ŏ426@{VaIQZ#0.\k/Y/}g¸?3*u Ue҈sh@m%խ3C?\Z`nN'AtT{P8޳h,$*H(S$.M"9p(Ɋ柙L=[>5٫7d?2ͦ6Ob\Zj)KQwu'3Z3iA&FI^Egm8 SH~ 9Abg/$K$OpHQ5((-4Ux_gĎSRw;%wOԼ?H zwx{뙔.+5+]10 iRyޑEUXLV^@ ]c-QϤwygdGջV_[ Bt^#iaG-YH 6zTI RbJdޱNžf^Q#Wyp)_ @rF(!LGN#[ޮ_qGWj96rwG2kt$gg4me&uDVvJ+L^' Ho]"`^JtL$=* @,U6I @L6^qàɵ.sq>3xT3FeD!z@xЌPc ,Nd$Πꩾ3ȧ!Xkhv[ CPԵfw7HȎ.x!v)E=4{l\##1jUXs" bƫez>2/s"*US6pz6~6SByWt%o]&ΰeυ4\TiUӶ1ǗWx%0}cL/ɋx'tv.kNҬ5:W9V-TcRͫ;;^?2s4朮D4UYGՏϛu3tC1&:bXAY6"U .CtO-q=Gb*/xpiD8=_>bRw":z[~9X5MoQ}vѧXF_.:5ah<8h7 ->a%n o1&)&%w9sD;:lIpӂa+{~Kw6B$Gߌ0X2u>}=vz@t{0 ѿ|bt̨XhUsi](͗S"_oT4())PRe8(gi~S9U-w7nboF4~6^47wm0lTR5#[Ԗy˃K֖߱1SNV\ ŽV嬯{v6n[Ɠ_Znp^|d&G<5Ԏڌ~,'1q;mg,t&vUt-T );ݚSVDBݹI|P&&)xGZzy^1p@GTkkxւ`*K*EXo0`>ݧ { =ց3`<]b]Fa43X|gĂ`eFV!HÌ4gz0!{U#3رFjY5Rwxv,=z )JM{R_ 83L/3W^MJEkU_Y<^G@ԑGLVaSoIk"Qpj?jk+Ə:y@l݁a/?uk.Sez71r#^V/:1'Ogt*d@JE&r` 9L w./}uYx\j 3[݉3r4tJqť~~Ϥk˴U&^t{%Hޓꢵ5ɱP{HmJR;K{^Hv轒t.*j#*&#V IBFf!,)rBB&$br)(o-'Xm`A{*)hC'-cZ{9R- L 3pK 'Q`}V f"@QFR.=#`N6h-T$XXCM²'D RD eSQT3K)C83EzԤMdyIܗТd/(:Cc<0r1;43*QMq Vtƃ۴M0}FtL9Ҡ @=A>ѺEMGXQ=o5Q[rxO/o, % :.ٶqZaZ y[Mc.[Tlkc釯F}E5HkgZQ* @0\- U{cw4KU9U!0X @xNYOq\ˬRauW0aFZ9SnXH}_jxoyݛyN8;8tl0[ܪ[ߡ6t:"Ste`]% QQZy`c.1a5qE-՜bj,UuӴ ^e;0<>טR 췕WF>ͺ}V.k?]Zb-mcDZ&1!q>/iQj.y/ h^{a=;9uuartUMGɒ;賒>מfzV}bT)HJ YEJaI倀 8L^uVJtgy{:2߇k}ɍ^0q) xtFRu7v޿7YP^SRďP\s3iRW&R-gYAiڗ4a0TP$?!ka=zQL:0"@vr >+ifp08& \L`t$ `u26a9K͌Jq! 
ȲRdC2e sg Yxohd Xgl6_BھvIw.f훺&;6 CW,_LS2 ' WPgemL5k :{xP )P eIZ'0x$ Nf۔#o3iɢq>*A% ]%7˘þĺ ۘeѺT`"9B|Dqm4k)H'L DYп/Koe8I>`^rD#t$`Sy M̗ĽzBꖐv)DLiv.С)!"O25O(`K7IDF^ r{i?i Wl&abXBpfcr$$՟rkˠB 1b{V?45,46zI7d-vi nI(Fq$ 8~`:#& \ݾ[jrpc`,y$5fl\Z?цq4 (f1Ŷ\K!S5ƆD;nW;_ÉgƇjg%k l5r1ͥ\$|Aw)]!ir|)J5}ueMj>Lu1;m|Y!_SeHiۢtEX-Ĺ~HHLrz Ir8|f8|fmFqWœI>>V] M~h_ɪqk`MM655}YW4cL[E-XZղ/|Vu}oVYl:/#ŊoYs9'G%|¿ieڇ׵塚tƓ(ߞ:}~]W_}W\£YO)>4x6ǞWoӵiae^яouI؆ )'Wm\@!8[HrҽUưxDO&k5²A$싅F{֔?ք a@_VK#9}>Μ~?IIڃ" !v:9m甃\Tɛ$t 3k{(4.M;z>So+!c̓& :i XP $f"0Ih9& ͘úNuy|{,tHHXwuˠ׾蒾V ;=@8RY㬄< D@Bѐ Fh| ֣#OIRX`=r&1ܻ$(a@vI#QPn0$pWI& ܰ_Qq|~pڔٙ>hSjЦ`Ժ*) #ALn^c`>T":ijm 9qڈE%^h'nw@)Oh-",m80P"_/sI\'EXYgA\ԠIɥU#)E\ 29rYvmjqӠϣS8`jɸcu`~ Oo-ddLߎi&/}%]g'YX/iv Rf\\J3+Q'# 'l&YŲNH.NY^RlGBymhptb;o,tj>' ,93]d􄶳G87BRa/]tU1eGb%/$߸2dԊ1^J Esl8k&͚V"HPЮ4o]q?< Nū2NW7tuꦻz9']]cooteCW6 mc5'OxTwxڕ0?oyv{J7Eɾ-JvxH|zKJ5Y>K47yPgC)w֡M^ Gw3ceήwouTɬltVLҍ9kdCXaA|jE~Ќ3. 9+ h{p,4"2; ^05jk"ݢ2e 9b@}*1YREg}*P;p*R2WUgwlaf砯=R1No%CԈpZ0&^^>^$x=6U7g^}<^SNaj@'׀Ԫ@3bN}(ʎ|Ы9"z9ʨR0MX4Q:aF&mPh% |Xc cJ]sV峱@,YԚPhEhF.K!XQ8@mHJ*ue1&' < CND=x0 j5qvz uңWKO1n<2we_0RUJrNc+e71hǸ 44oR+jrL&Q1f*G@!LO %fb^D" Τ\&#U2VgeUj--?8;Gl;yDŽ<,ʡonƝMߟ>)."EP-#ɒVɲusq11qyYu7@P= )ٔ{ M&ߎlsH7gI\s1jWӎ}l`E[! 
9`CVQrF4r^Kc3Jc: ]V!yȄ 9d2dkbbA$0hUddlN}֛d|,~kee(8Xm/QȒrAƼ0R.#^l 㴖9t6uwY1[,@r2 w'1N0\t$>Ԧ&n4QW{e1:iɾvQU0.nP1""e"  Qk3 0n`aVӎcj?{xVq;XlK'Qp);TolV} HF.$y<M*lDCad)EG{\l6v73Bv6p$y~|Z ^!e2n+A̅He.x4yYz%Mt :r/Ʌ.l_( OQ~wuz-z/A;nrr Vh-cP\0+QZa ])st685x%y|tdߝj' .#0/P+fA4n/~@~?Rbm~F~]n2SSЏ糫,XW4(}9K?|{/i9J9عE+f$+o㬝|n>{g^W:BV:PAO3Itn]\z_MXt]#KOVxDPk;u;c?_-_QJ{..g n2QvtUYH ߭BaLJ_H;jlߍKgٿ%Ӳ~AsNN['yfi)}/kM 3PRSdNd~9$Ѱ<:[ Sb/Y#eRTO`ngz/BΘ u)s0i󎫨N>HRaf!aZ=<8hm145sn'ի@ƸطT[.2 sf!XcD2!6B٤a`xT5ӯ =A7!FH1"2J~mMҬYq^P$iY e_"?E?4X`Z7p#UqOJ}Q!llhz}+DɓR 7!6L5>k\ `6;[[_+pϋ 'N!9Jc!F@HcP,MFe< Fm*$'8S7{$"灁hM;O୍a)"!&y<7;6on-qZ: gͱ/43nQ&|*y >Q\q@G }0ttSN6m>FQ ɾ6J5dpL[j(7GG\Q7@:YfvTVAHJڵI׭] dC`RfGJxGc$N  Ev]Q5qv#_@]O]V#ژaeo!iG!P}t:+&.jҥfFʁਔo@o_Ci>_dGw0@7 voɿKw0F|z͢>A9jPy@u|_ר_"M6^yd &сCKybrS_@, U hF$ e,ZIk}l“-76pi6m;Y_Nz7Mg>Φei2ʔ愘)_F7Fnm{{鷯뻾d~ҳťHy/ _Csҍ{/vB@1:tJe t-ma#A=/`D>A>Ʋ4Y(\"\,aYx < \Qka>= pV.T6>S`^2g@yN^VR4Cϔa {aXY!xҪK[Ju-9g=ȉAZhg4web29ĺDyDN8 eps4hȘ3#RĠ>czr{EyS@vR D63I.r٧ ֙(B eLEo&O15{R~xJ?{ȭ0O'@:x3=@vs dHegfV˖/<2e$cԦUȪ,D8HfՐS4uܿaP޴ٝ YKAFJ*qnbMm{fؘ{/UʃƱ? =vؐKt?eH)CCanțMIt90ù:wH׿uN`V4~ {ǰu3wǡk ti%5H<;UpBԑFdCgj\Gɒ,8:WQT_T_"jP 0f ,l;,gTF0Lua85O&ڇ`S6i>_K2u֫fv6_]Xk!Y瞯)1&H̃ \S.Vh0ZB)S^N_ qixtrk<0&[r@/[6'ύB*#Pkۂ^_fygGk_oEkg-[!|ȲpQEģiXUX/i{"3C7xQ>iv;vI q>Vb((d4 6Fg?2xz^w^ +;䡶ێio٨gz3@+\yz3 dl66p+6: |iT9펙G`#Xd,H痫o? "ڲ>df?mv8zZ΋h#3NYf2r11 2gֵA˝0 CZ8$tBftGa@&1y O2>b1,asu.4'p=B#}A '$gN \T(д2b>,]f.1GƔdÿaW8"[ hx6w9HW2uknn[ʝqIwv8Ctz3t3CɖkIzګi6ͳLDH w4o|_zNjnɍF?ǔ^|ǫ2;Wmn?ub+ׯ4g[ /iĶE/۴l>~7]e]y}/$o{=rx1z6o^ٽ2۳{o`wYrZ<^X"ޏ+sڟOl#9yㇿ/\s}>翦/am~޾.~ avqvq<_.f O'J]稸 ?liS*OEϚC3yahlgOgO<˙c1꾩<ڕU :h)/ITnc.Ӆ_,RwMsE221 r,Bbn 9ۍcNM;Ƽ}t[@݅ &+ ][Sw Fo7bK5vBO-_ ٻ/lfݩ!j` ;4ЬwE[VWI|ouKUh5yv8T.щk=&b}洰ETY_<߮Sx IT)j2W6Q:<"K&9DjiKkr ϱ鼺W`ohýako8u{2Į/҃um@Z|+įXcMI4 YlQР~ X ZRlL+6p#C[nAwlhtAϓ#ka)\r`.u(ۢMa2U(<ӍV6YhaY78POq9O 1PùVIL@@^)c,]^*Ņʣt>E &ΐ  W?7_3} xD 2CHbR>1[Id p(Yyfc 4}YBP6ݻQ6Y,u?cB LUOWv/D6e#SC9qFπ_v] {N+)NwR-~>Fyr*(& VB$ 3!l3eF!c> -1}DNZXqҊ`I+R VT? 
>.OZ>,IP|prRv:V}N^wB@U1ӟ d6-YY(pDmuM.ZGu22S\g&%V0,w)\>L$;  ]l8 }"ד,PM N 8ɅHJ-E9Bi/YS<f*V58\dIe+UL^ dA` XPPE{=*Ga<_ͿKY*SV|ʋmPq?mp1=8NvNcR T??,1qN6ΕC!x 94`(AXZT{DBTނ(uY˴54 *OYrAГТU&fD)ڜ5R8R i^Xx͜V # I{MzyU< J a_x9YlLf'+GlPD\lLJ9/y9Knh`LGb<ur@P=!) lJmPCK3\)ce.Sfx ٍa6k!f_Pvڢ2jv[KњX<1|OF["hi\bF r"d溱}i1XŁ"d 9P211" G(zÇHr2:+a5qvaԯ' 0Ǿ(+#q@m//mxK9P) CvZ*eN1 : {cPuR p P8I8H I'͍L"iFzCP?'zԁpqj&ѧjZ/.ʸ.b@ -|"x"Ȇ+&|)4}j}Mp>BڣerkM7n,WJ<'>"Kr{?7=lJ;Շ{;Y{U- uo utao}qtBn\C!HО|b]އىCq) sHK186㇑չtdpdGG2!y0IK9_ǢxgkbIdR䴧,CZRtjCW9t!&Hn.e2Y ๊PzXfpi%Kbjx)ٿZs];5kޕoovQtJ* \|W:ṽ>6BDnh* X{NY;%u)zOg=q&{^(*^=IvJq>F x>6"eK:Bqyܥwtܵ,o"#*ydHQX fuR$)kY1%2XUYvgȌ4aQ{ 8FrLZJŢnVUŢ]l2ƯkۧKN$cf²φl_qp򸻡l(-g1x)`aZi|1fFN>)Ŭ#0] xi78Yo` { !{TID޹!O1Pst,N6&#( Rr$OhyAg 7X> h LP̨$/;,M5 7 oPWQJc=?{Wȱ~JG%`$Kly|pnIjS=R$:yc9SSU:W64JVf:B k2N賊xR;谨|\5QϾC1$BG%LcA[Dǰ5& 4òK_zm{; &,̫o/n_6 ݁,AY쿅tZyɒR [[0Ud}4(*EUnif9(mKX RdD2qKAFmv)$7$Izm!}u fr4ڬ:njQ MЅ' nVK/Zon}=ikc~֚;=:dٽr|B'HүeeL-9{pԃG\Qh7@.fMsSy!N" ֝#n K1IE+Q BPRhό@HJ=p]t^(}9T/m,;"iKQk?>5uuT&>iҥfFH?Uۉ44FÖZCvx{eTݰrn}L HK^LB.܅Z+cҩ7H#10-1mr-yU[2ri<קeTNPzԉC Bz"d{c2zG-ysR;{5Re۫_ƟG[2 n߸q+"kgWQ[,ݔ\PX8B,RRڞ%ea^BY{"0}z*I8+*˄EP2@hA313}Uw)W+mGNHy.`鸮eA!gLxHFFkW6y!&[QZNxoC$zx9뼓1͜粍IJE&lW9CMEڐQ{Ug"Ì$ǐR.iU,jI*T_/,>U>=(">bDDxJ; 9XNqF)281!S|_78j2&|2zN]1ȡU({%6CqslM/k84/щ.|sd~ˬR]u90wdէ^kv`kv ۽mnopzpۡk FtŶGZτb=Y{+)\p+DypfdMdqڥjz,Yɂߞ6{ՁעUDPUf dᕖ8s9K[̓ .}}48$G8i>LZ"?vβKM3Yhc/gˢԭ:sֹKL `"%>KFZ'! VP%w[q8yBoˁ lQ؜<7FZzw]+rvK8tl*dj<D4\>n\'㒞2pIfq]^Řvn{1***.K:J\Z/M](҈f h~HNji茦wY{?ub784ɖk i̗8@,Uy5': tu~zvX#y_uظvvؖ8r@U#\r`+Fm2w$jmrF_>~7?xSG?5}Y8;??IsΎHn(}W'ƟI\xv˃{gU`^i[(_ ݽ `rjh+A;K < -;_<}ϛIjg{<-3#tgZm=Zslzih㚛cK553I8a0ZV2Aj[b4&Mx+w`;k.Qӡzˍp{n]kr! 
@xc".)mTA:apPPV,ph2Ž2\[ñx;3xVYd(;zgt7!E__MG[Ol ep"8sQM̃/mɒgPJZѠ4XJW8i=Kɡ7X#s=MzW v>`*۸3֓ltJVXڪja}%d .9cֶA3E\"2e4:qM@ϜVsGI -dx߽JVħ~)rՋΕCvNE-$RѦI\}DF'*8Lr<o:t3Bu^0CNESo[zlW5S+۵P& i,o 5omcoo,z˖!o߼#?/2Mŀ>_G{'ݘg峋izp~GvYqTniNrP$]G"gAʄGc9Yok6:Q#H3Hnv覭}Kk1ɫinBL.; ЂeNwCͧ6NjqM /jyZ4/B Xa nV^EcvZ[EUč2u:DzyZ`_W\kӶSJ-ZPϧaFBE5Hkgh(i`YRx@6AN<}'xYz./ΖTr7Hj3 j?yI#u"B|5=hd\U]ido0\Dt6G$K 2Z%3 BmNZ{ŲtVR).T)T0-uLzY`wZ"C k~g_B/G܃OH3M:L;F<Y9Y2\aN 製FBIymAZGhmzDAn ͡0 w8!Le;>ꝩ8>RĜ )%. M`B@*阄r>u xEuFV|D{BkZRhm=t]ߢ[._N2_jkٸEgu;.(R0H@8˔l1!J=!x<+:bqW"-t5^H+-YeINtG%37[,:2Z^pnH"'AH,z uW"=M3<|/wa2ZQ!{9׊ ut1eM3=? ϾR9=?tBVhAD#t %4FSRddtN_lx9!T6|ɾRޕ#yN>R V|99ͣ2|ŸѴڇ.HmaIwzoO9z}巯_x/_qa^^-:Hd& 0O?}7]F(k~Uco'<=WKVv*ܸr  2eA$ڥ}磰xDO&kU²BB-`r=vV8 1!2s` ٚ'n8 =(rQ18SQEpN9 "@P&!T-⤇pX:=iaJsLQVffLeu0I )h bF(AR&3ZA|XשsT QĞ Q2D-W;GϹ$潬^ԝI1bԝ8-p;+8 ;FY,3Mi ѐFYʇIIRGR%eKnhC.8 ~3$PW.1A@?9+xTT$ UC7k .} |6"̛SOggx6 `uUS1@ǁ1o;<'?A/EtK76]jlЄ83鴤xyxi^];܇{sXށ?UVs(P֜(`r8謳 )DvUNɥ* Ì7ܦ=pcp[d,F39rYQ{t5sv\@Z)pZSuSF؏'a2gEF-_|v2_f̞'eo?5K|zbz%\Qf\6m gIVIj);} =[^V=~'hQ-ѥ"!{ĶX|r,v'{{ .NEG_u[?v<A N(CZ3K+Z#mQ%([,n%>+\[ϊF>ԥIJlg 0.Fy 8n؈)OΎ|w/æ#J-zpgFgg!YH &z0х[VwmtY``]Dփ!rcJ0UgPh2qg5sӳq:R.%Up](vF2j2b>EfٹlE.ȵ#'PaCZw:VzӞB(0*IGshB,ЈB6HnC3#cѱ ,~;펾m]o{2h|@+Q9;$V ;"vU#ې+l~`EU4t1yC1[C0H x0tQN<ڼ}WG)+_ʾdp r&4p `9*>25'cMA4Yf#:x* 0he(ًup,1:eyFJ.3!+sA쀻P9(Ew=ۻ&3r;I p[e?1\jaTB M>SY w&ɔdt)-R.i ÍRzU:}!mڠfzɖŪk]jIz>]U}Cj.!heM R028˓6BLԧs&Le4`G%5Z >WqJHv AEb}M(GOӤe*C{bjFwS%od~C?TlO[籖YJ+^Љ{i*O5IkdBe}n z,盵U6Yޤ<]7lAxj[v;{$gE$.vK]7g^}<2&422;pdKdCqH;>+DMF+ydEC  K*SWl`wĿqI3hf޼~:\Cm=Z*(XCVQQe4" RźIJJ6Eϑz@!l$x)&' *g@[gU39=+3hLE68A Kl.E38"r\&'U3V3gjX/2Dz pG_F~qYq wwrG^<,hƝMߟ3=6S\(rDZ6J&JZ' , 29/cqEYu@P= )۔GZب3v,&C,hS[O9=NJy,^Zk^[ ^{@ Ek"9h y*J£618Y nljQ(B&dȁ &_ (0 8G v?fv˜#VkzDY##qS( Ad[rU25 >[J̉B7u]M=R:}a_CYO 4 DYHH qpRs+xx,Z8.ObR۔7d?~cR)yYW,d}G(FM736dF%yH{zvRkd}zs&Ggkۯ/ͦݥxFiqjhي{F!R}s}۬/zZ 0G[װm4D56EPfH- JZP.ѭ]#ZP(ڲgp eI=6gI8SX Qp x V[.;M;U<|qvsᲀZu{鞋d9T?:W}Ftl'FGWٵE%l@ٻ6U+A';Th H ȜIa A 9Ke9;p0G$3+8A$/wݭ)eMݲyA >c9a=yႇrXN97Zss0T[a[=ӌc@R@M&9 vp< Y+z:$ HI 
T.FyJ+rh!,Q0sjVDtiGdn}?N:8}fsD) 'H$tm:B{FML JsfLb׷70*Xó6CCۈz(j[S NdKx !it QNCp1r \࿂{"PfωXOJK$Qԉұxn"D K:!r"&0 / 1a BWHОx:s9j֟W߲(uB͉"*O 񠹸ٺ.FPy"RŻf8/|y\ /#T͋]o5˞:ppp‘8Πܘ=+\NI73[i&޶AqX؀ iO~=gK+ `j `lwKEoQf.. 佌B3I/U}CH3K{gO$i<G#oy}gyi`Np*k6?p7G!||`ii~Zv^|8EVoI2?mp0OgWv.^^ÿQu ߞ O?z/o ??;ϋ0E)2L}u4'C ~/ިh<(D{FU 渜vf.7 o BШon(jԹr5ޱ`6|l m5 LRS~o^j5/-a6=wR/]{i^ox\nXuӮbIYDaZ?oJN7o*tw/:$S?`|XfUyvFe5D-h!A" x R)gLTP+ӶVM֨? r,T2ʺpGOݬt3F` @͔Pp6o} c>Zt߸qިD%w1x`FEI+#w%E"ʤ*5EZxP$Ɛ Di^lAZ# R2'p娘E8+.yi<^&I6 /ʦ (BurfA8kB™渽kp%B<9  \  9݅5";˅FKT:Mk dO}k.6M_|5o ny3=aRF8XEUhۓ 'rA ȒiKĕ:V:ӹNCڱ\k[ʯ3A }Hi|!$lt<A4DsebI(PzYȠNց}sj(҃Kk)h]bz.!)xDmr4mH\kJ=!&ji.*`ϵ%с>ipTȬn-zhS\gX~Kn2XYeSlĜ@(rRI~icݫ?O5L}TnǦOYkn05ٱ}bSPjQ Cv 7B_1 &U줏TWr8zuVW*^Ȼex>mmk+zzn${7*0OݼYT}MeF)Ϛ;?g{/j~U3x0>폅oR W)="gI7Ï(E.^p%dլOw{ 0 Qe%$K)>EJThȵ q v'\i~n2AkBzY*H2ћSQtdsGPn%O2T;Jz4`.CGXtt0 O/{(rk7oˏ+Z0 演IUB$5DC\LnɂԦ8^96^3v ^Fe<§$i-x|$\@dԚup)4*Gĝ<qd Tk9= Tx㩎F&|<--XiD6ەmrk(\>7d?}z5j˷7Րuq7Uh{j4\Sf `nY ëτW5\&%&2ZfJd_[JitI:^*9qK3Aa~%0%4]"5n5FbwPR5RG#9NSB0Q80o<Bviʞ&M*k^Jv΃RXtKDZ1"x8cʱm2FsZQAi4Ee*iъR-2B\dP q:&"8_S~fexA' vG s@G`8Dt7MDž-gY]cK;Ku$:9܀E:ps'X{0{Wx|i)IBv?%'RZym20WQ.YVWs]u KCNqJt_p^?&y1d[Zbޏ)#J_Rcqȍ23LF=˶Ikꮩa~$;if#8/nmvIvQFPNɈǽSOd\d2E!xulגD 7ZOHt,|Rłpt"8f)І [ Bu!^#[ fpfk$ٍUT23 (uNNM0B4 VbeLMnkkeCu?fgz㸕_r1/E9NNrX'؇M`o`ihd;b{.I3~Ep~*9[#.9?&llەV./i}V?B9vDY6ۣWf?I~~ *JH)d3Y-mRŒ (v&=Iia.]o= ij @!0Efk]D$ G'DPN4ͯ+v0]QV/}&:|l(M-КZlQpݝ/}Cר5j y^ޣ4Uנzؗ4,-H34UJ=֠y5hQCF' \^UWO#X`g_י}+UR)•¸}bW,5~oઊp_uWUJ%GzpeD,Hz|>l8|pFӪ|Y;|q$[{WVzkbHgǓ"*E/ b{F|gxX-V aCցy|FG+0;pJ#Ů@L^<<;XxL!L1] ydFGa@*n%dF]ͷ+^ LW'ӳ ̐wS2w^>IW~r#}GT#ppV bO\J}Mlt:&CE |x1hR(+ ᓤ~`_oHWz%潮`?#ajL(WӷGgg׍<ߞ]M_,&sV{'ŷQ7۝rFhSN_qg n2^۳m쒿뽡l{c_7:xcA&9DMֈ QD9U*RhxR.*e OVL$lHToL{ƴ]/.}J *+nlJF/T' I0 1xHbA'@3qѢ > j{_ӶZ xC|q~OSqpRHn]qdGÎtr%5;R=ڽ?Ѐb@3vJtJ[ɑT J2L}tDU ώ aŕdI #(zkuTm;ɚ;fcJrI&%򎪁ie!EE0 0ZIQ5&GĹ1&֤h~C}WlcٵYl4S/ӊܣGSLѦs(C"t]ԮʄYEz^™2%q619{BNj;VG0\{YoGxv߰ELO³}١n2ۡ %ݛ־i蘒y鲄A &(B\y>{{/z'%٘ s\9津%;`ssH=۹r-e xQȤY¦Z:)MœF{N}jǽ+9rRuYr˸üR†ԣ|ǧuH?W<''}}pp<=wonzwdCF^6Ը5 2O@GW8z'ܪbsҕfe ϯ+u 
f=]9Pv-`+3U͇mc#Q=4_>yw̧(ͨwqr4Wiw_'Mְlu"tt O#EB~#ĐX>;& %iKY/[g"$0*TjQ6cNT0fL͕e6r9SJs uHB KUDJ"H*&ouA+ĹgT魐=åaoEڮԼ}Ӗ u>e yCb%'`-2^3"~\jFCye (>fSPFR&eHF26[)nCo !glYf5QEe87qAzmN^EŠ~sw\:Ӆjo؟yĄ RKR0%!UME6jPv42dEHT]1MѢ+)D]񶈩ԦrĈƥJrj'* R8HnUaa3 (ɖXF,.%v2 ;ܳ2)Nӏ8]?tt||hz#0R"TZh3Y{OZ'rTSBؤ7BjuQRBOo'>%"ƈL;NҊ9nۢjjD쭩"F$38S-JPjpF1PXXG123C EGcM"ZFđ1PH,TgGs7N"P~l1"GDM㕐>iAD m`jYC :oqVN^U5ED9{fZb3@6[tٓV495Ff#p0nn\CZl%i0∋i|11 ZQ_RQs >Y[+Az}#Zj'qq8s"M'%t>Պ2BPĺrMR4ĚluA m*K๫h6$t&ĶbdY0h$̩$F! X,db+E& Nn&ΧP{^=yc1lJ䩀RE>,ؒ(KEJhmaY2Xva,c/yC!?sTt  îdEue.iuѾcۅ^+ d˂ڶ-8lKt~>m3>##xPo:7xů=XU9v~iCn͕VpcTU"-~>>DX*nP/oPLWb6nN#z5otNdo󘝋7o VH6Mvqd:~"^5rv,k =v,A8(5R4-e+#A !fHzAQvm ,iFh]3s!n>vw]p1up[3У$*,\Z P|v*jmVʱk e[ސrmG/M˓$eDB%"޵w$SDDѺpO3q< Ӌijynok_żާ5o+F ywy&gIv@ͤNuk52 ^|SMHdS-6"!ÞbM459B-K~-<4h]/x̿r1NRbr@sErdD1O\JH+^_rY8I=l<ф4[ΑDǫ*Xi`^FL´־?W[t˰Lʓ,F]odAZ:3lm{-]&㬒tS+[oWidȮg@n4rs]仚U索N=Iw] (Dx`8]|Y\bP|ejS`c>,aQ%|>(~&\MH=sM)jp^it fTg+ -joˌrNqO{mWW!WWkK_s[r dֻ˝ n|Zd/렘i?RA kZ$O?)eL߅ <b&?i*`Mfgc?x[mjwޕt\V_,֖\i~H\wG`rU4 z|]Eo~40w|`'^{~KpT ѳՒ,ufR7Ѣgc.*YpQ>Jyd~ Jg/Pi,|k2~ cZʬV+HzҳJ-ٺrVB 81g15jY ILR.UḒ>׀EB.ϒܑ=MQ B,}rjty헻rS].7Ȓ9i#NsIf6I:E90egtn/B/ O|t2~H6=ûhfx$;S|KP,}'I_31{& ޝXCQm#\u6.6yF0I 7![&7\6uvn1IbUN6ϑ\Qcc;,7tk^&"09&]M5Pg{ҽ_C2frK ,q-bacBI J G3%O89,ɕS \9;g'<}c1ibH[8% QuF/VbF^u"xJxc˫wJǸe↢TҠaD!,"?VڌOGfr@f ?Ǒ"<^Z\O~xa->thv=ۉ<8%qq:butR,@7.h&@GZ%0X=vAY|ww}}/`;v{i 8*xxƢj2AGθ:Ƞ%c|*U9J4_b .o8[ztWI@Kݵ8MyZAؤJ@%̧7JO/\v_Vy] |e7G?A\;?O`ٿ<0rc?ZKmaTl.J@Y-YG_#g?TƾT*wj AHFL)ZW+׭ P?$/Y eWiu`C,eC̶ zT~yi$Ϫuɛl8n:h^TSMb|1@˟_g:/b/7w.=M2r]9&@.ϲx 3꼦 s,Ϧ؁ Ay$"kY")a%'xilf{` }z>n7vG_X>HT![R7yb"7#0h;P sZD)3ln%9Y{⯲㣧:{Ⱥ?LbVaAƔIZl`B2Dp.KG1Rm|9&r6ȹQ- "erbrX+"D4vmR+Hv:sBRi>Ef% {@Fj.1'`6 Ks)Y8WLZ+8vzC塹0}8.L;ҎOzg|7jû%ϡB`Xi`ZP䥶Kw)V$e2)C:AhIM)0qZFeS@^#:pWR|9r(TҖȲ.n oNE{O467ܗjy-Z4 tMnuuQztx0V %桄HKHJGO$"tub|ƶuz4:& aQiPZKXB4X*AwX20Y0>b݆Stt!401"Zg,#הL}ÛF#$bX[vW};EUĦ-Kfp[pR)OMH6B萕,2Q ۅ˭MlK4n=1IYq-HiPT[ee+R \#o]4#ő~pfÑL DX_ga1Ya@d>UZƍA=SM MIisXl yȣd7NaݧH@+PQC"r4EGg7QLݑ)paGGQ.kUÀK~BPEJ wKr%LНYA.()X62RԳ3TWJx~1=^}!%ђanDjAMƏBrp*6 -`8-$n#LdTΪɼK3/Uɫb~^?x 
c)\q ΫjnW $j/'wBm=nHk7s[760rI)i'GDW}/5#J^A6V,^dIs`'28e7Y#M* 9V6 ZךF?Ɔ8s9~^~z?=Dӷ^'ypA$ CV{SKwӕmԲn)˷JI㋱~Xtⲉ*P (Um@!c+usXdneB]MB3!T廨`~%r-x_V]R(MCaoѧ]O"JEq=IX&=)`aDH3惰g%t &xB zcᒗ!4)JӈB)edIBSOUר (2N':!31BFayӝD7wUӸ`VkBqQ)(㘲Owణx&O;rAĒ\yE;{/$wI\v=9DKfw)5P$ B)G&sǸ/P;pw :&'gMZ"OwU|6>pJ>z l]] >zDC̋O$Pdl΢ v.4^ ^xDKQ -GD ($G iTit4Jl"1[x)C0!B*Caj3ւ&ye4zMVHKD3r:pC#"h)H o2'.b w)7hFA7NԹ&l/({C;${(]ՒM$tY}츘fJ}fais**y'=]LfƇSbRRDs zPάӹ(扵TRZޅЋɇkt>B;)IVO7"^jTQ $O<egJw@CH 'yifaQņ 7Lna)6TDw=IUP'n;JaR"a=u਄-׺WQD=ۓZm}νDcz _Mgyqo.~6_2@ n y|7Ry6zM_-^YS1 "גVL`閧$$fsBȀ]҆˜$Kv:nfMﭖ*wiyz~M&kkJ7o~ ɕXj[ 眬+^ٲdfMm=hslr ~mNj-mOdiCPg^y.s]g ;0q# 4Ӝ9#sc9Q(47'ONpMnLn;؞Ln;Ѳ2h<.FLka D%HDaKB -=|GmD +7‭iU"B{gqoR0#@pEC[gl>ѭ!E)˻_.熛Jzꓮ8P Sb6rE/mѮ/Crx}H  ɒkq.@gzdkmYZrL'YC+La*W_Ky4ث!(WCQgҌ"%J!0v=fIƽ^>T`Tk08\2ϭi^wSௌ~?trh֤(;.wH\]ki}↗t8MiЪ`Fя[̼JZl)tjv~m Rf#Aj"ouV{7t nʢ]&d2bv K*>c^f%BIx跒)ɼTYE~Am%8N>"hzg0IuC!(2Hv2L=Q;G?x>u3_4_x$uOV]u%>e61x>W 85ÒBF$G߉^s54r_.J͟gU4ck} I ˜xM?1`0X65rk軷υwuo~}GY >L|wC`tَfy`GeɭEF{6sPKZF'7J+z!HeG[ijJ Ti*-P@U#_S L1qū9W{TDV`)iXe1pe1,fŬUbVY*Ye1,fŬUbVY*Ymc,Ŭ.UbVY*Ye1,fŬ^mrW Jil̄Y{>q N^+bLt֧E{S,nUt[_gj;pgBH11'IcQZk=: O)`KEL -cK C ң ܓR%PмI0ezF0\{,wq^ *6ڵXlx},͖[z穣[VhYzv&MV5']Ȕ0R)BDraIC1GS1t ëII pkڞgqmt /  rr2ZrGmOlhD$G!i0>h=v< Unirƨ:XӣcQF99!;Y 9!DDJHH=L E"_RUmiyP*/SJ$OZpR0QT=y@L"q.:H9&`Gzq6P)!'W=xM)"HI`n^ZZ Jmg>(7L__F^4u:%aħըt9_)Dd<[)iCe0L]HP&_?.5k.-gl+ww)1m$O[iʺϤGrʢ q9;8Ird䒌Afvo/jf`=wYԑ&0i\ZPIQѢ4ԟg$ņ ]kHfD-qi}p*<mL!kpr6znԁ` A.v{Es M!J`N}l5m{㫫˓مepFŧ[ao<<=>7m@B6ͫqxFG`$]%u#:)YVi 46xb.ǣyCOn<=;rB@g_꼒]vՕk&IYO1R V>_Ya&d|0.߲cK祚tNGۃpz~BïӷG_}xtãf?Q \Pt"k'p6S?(zM.M#uU)? 
~Qtti)nJ6Fq_!IaL жϾg;r?ք a@o^$P~xb.߸=(b kVjpN9I{EI wMBH2L sҴ}Zt`zr.ױ23d0yD' `+QR& d3IW;og>Ǯ5+k lN)nbewgvfʝQ;*]v2;B ݹYfif/|q`FKQ^@Bm/̳) yv` >f!('L3#s@&#IJ)kbJd޲^A_`Ff)9E팕zȓR 7!6L5>k\|0MN78(Q"ƠX&'xƑ<]4 mU\^$9?^W m="P}{ͳ|=3rtoGʵtyq& MϞ{,^}prБlPot)^m~pIAV''/k4g_ɪG\Q7@:YfvTvAHPλd/7W71IE+;#2  Ev]7qGd/{Ƅ-(ڑtkSxAvC1u†qv5mϞLU_fS!M> )IsdJ20sK*R^ AYmZz_~j YA+^|[ofZ=C#b2gK͔AreY8'@z!7mB @x::8:4~{As,dp}zy2sL> өXO_9{ݳ+PsIQS*h4Z#QFgyFt΄qhd@94VYwd{_$iZ4,'i)愘(귩[G߽Ӿyy4up){%ƭA7@H/!ݘ?9i;O1~n٤:,$!XRTU@xf[v+^4 ,.r|-EZ=]T{-rO#]*i4C~?e-),o@'׀Ԫ if+%S}>I1{ďcG#Y 2A m9N)=(4y\&pF('0DCƈ~q>5m*mG*(LGRBh,jC&ъь]BR~c*M=4 Ie寘<$4HJ"^Eo wLe_<jrL&^-Fc޳ALO fl.EIQs-\&#շeM 5UziVpsw[=ga…DnWwLonH>2 x?}qzxtlB!R22+Nr_N$d(UbK`{!E1Rv!!a1YM`u)`gݛ8[l?%s_voڱ=[mQvEF#yk"9y*JFkilbF qLg*~}q@2!CE ٚX" c*$g$9Y :z=lyl1b{ӏm-"jEO6JAYX!Xr3/ '>eNtٸ^-ENEmֆhI3@H:indr!2l{gs'ړ]4OkiɶvQljP61""e"  Q뒸֒+]ioI+DC%y h,vn1Y;h) I5)mbR*(Ve&+Y2+eDd&5|}b_>1n {^o#U=N&l 7gNpa5gF}y?.2NtR;p{Un h"T &RHپb@~vֻ5^͜l~CY.sICVĩv!ڄBdHfEy:QgMNC.-/K\ڞ9nR/[EyvTh#a dX֡(#knuŀe[Ǐ3Iӹ3hξOj296vPa€λWa$۹/}orQ9E%w**g@h?zK=j;,S=/2Vdn|*禇"s gOIGHhLL$\@"+JA:SCބiBԜꪌl!{"0too*lT5}< Apa,iFeVoX4 j0QDTk"Z 7,N`J{w6'/~\g}[{Pt!+1|ҞOpQ^"?U8u5օŝe8kdelHUNkS @ GH*݉U;W G{+(4dslqô{)YR+mw 7A:rb{7翙-d؊`*@L WI,W WIo㜽DetF_#CDj~rt<)Ծl.%AIG%5QaM` Jj@IiG5cR8z9i(_l8[}>]V w_l6-%kَ@aCp[kRxieAgD O,INH$eZȓTܥy.ߏYa #d%3g`T 503),uBb% z11t!.>1olȯ%yHy݆:;վ^lˆLTWN<pʹqPma>On;ՎY#I5@<tv}H=>汁)99ʉ#-% xΏZъhNzya,[?< 7fWD)J5zIptE8Q219eV}gְ>, ha>WBpSZ E]kviH^n=^S)^rgh.rQH{"qAqi'rѨyh=TZt7H٩Al>~'r|g>x c;q^>v@45t2D1^ٽ\ G /CLf|=:Say.ΟܱuᔘARQ38 u=-{UE1xvF/7SɱlEa|蔉df[`\Fnjkn1,~Ogݥrf9Rd9s|gـjzU_dǭZGj$`z?Y'n<\Sdt&e$ſ:tf]fG׵zzuoO Wۍ3~f1N]B{OşZqwKj=u$PkR+~ψ2@H /r?2ɻ1}}@up\8sihP6%$0j\ Ϲvk \*=^'gOUrv67vi`EN]9; {UGSrC{}`6:^3mIQm]W\ߝUԁ?-(8i:Ȥs墭g^{Mӻ.ua~[,-x6%.zUF_|UǚG5W>FX/H77|1?Zh}7>J8F^v[Ǜ8oMofQCßzT|cQv+]K /iH p׎TXS"xH/_ijaaS0q% Ex(RSJF*'F2-9IG<Qј` })quvqw78Hrr[#"A q x R)gLTPKuvRiX tt8JD]o2[pu6? 
ifӅʴz&h] V*X$!qʴPU1}n',GQ5rnLBI *YԭEHsW$hЎ4ʱ<H_JQQe$pP8pFAYm\D"FPף)c38T1Wl H.Qf7Um]^^=#Q7r qqRRp;z`rac, w\z5zyC c0"L3GNx{25!񣀣Ql~#*1LS3y-'fqQRON~]mNr52X8 }4ćo8A+7d|Ţ}o7|)¥PJϿJmNFOWW9YNf m+]'Z)!TRZM2T*"ҤXJs0$+Hn@#7dPܐJw!lPn`6CWnEW 4~?]eFtutũ= 0HW.R*w(5 tutRU]!`u1t U ]eZ%'b#+@*(0)2\.KV+T|#+lWUQ7g tut8~U,ˑ.hEM4]!]iKrd@ G 2ZaNW @MAtr CWVޟ fk*@Wӕy7k`Mw.㇥vhڡ=L 2]Slτ9qKZ[D{/Y-K{>ί/m(S78A ?|Og^38q\vj1+-@ Lu>{c!{BhZNU4 w["U` )Am &$xgW]3Vfl3vY^"hai) <1T <0l~yi\d9|UIPyTS&)HC2CGдCkd59D4#H B,9]U+D4*dz#+.Q]!`Ce1trQ ]etQA:F|H +0V ]e@KVUFJIWڡtQJ2gFCWW*2Z{ QJ2(HWu}Œ+k1tJ] ]eFj#+S B\cpM1+D >v~(A tute3 +8)*ÕDdNWR9k+A9},V8-4r@I!M% })в@еw Y=U`%a冖pa2jr$-Q~Yeyܰh; z)hZ]!`aX1tp%UF tQJ6ӂR]!``p|DR}+D)kLI *UB@tQA:Fd+c +0(R ]eeR{UF@WGHW(yAt ԓ늮2\U2h*+EHIW)tQvut $BRcjpu1 Urpd8J2D +Cr@L1ts2*w(}櫧R O<vpD[@v(EϤ+ڂ@W;z߿-jDo=U+BacxI~K /I*_l׻2 4Vh7{A>Mj9elye >pxק'>gwd(}J1rd4mz{h'e4ȢI_Ж'26?: L'PcL^퟿Kd7DWf> ѱP!]9D+&ތrb(c+ V97DWp܎po7NWՓϑH-+/<~Έ&Zo&ʠt*D%uQf pٌh]8v((]=CW7DWx;j 1m:zu%+ϑbla0\hS:vJ󡫔Ȗ620 ]MO}s>dҪѕ{Ի_ Zhg㉡Svzac%.<%;8>'Xa(ÑsдS~Rwo{df\}i6t?cA%-E-OIOꀙ&=Mw 9y';;? r3#cL._׷zQ}c`zȾK>?gr'Ec^.y@ .fhwqa {w]Xxww*xnb>Qw/9=v:=vq=jO^cU^y9??7q[O}{:ۻ?>,h@{  /uxBbl>?O<4̮d~qz#7aE \A|yQ~ft:[NKrf笏SeO %qpt*.P ~)Tʎe  m+Ϯ_\:_O7~00Q.`GjLZGf1%elSBwL>YMj7]_I;4q*BB9qyɥ4[ OL ~b(. 
6ypJ 5>LCA@ B%7=6SH y3c@"ZND+Q6X"%Z̒+b21w8d(9| RlXD.%LJrx[HZڊuA7DF; we^w܍)aM=ZzB׮iB5Cc_CظR@1 Vcl4 w၊&3')*@~ijcPM:͖J*0!?ABvL4Vw^#ʨiHf 8['J B|F݀F|2dxw?d%Jz#5SAd*هX W|"RYr!=7!\U28ؖkC'h%5SlR4#HT5ToШc{&D~`taZoO;α'_sXhhM?cw2Kh`M(" 9KFcqкIȆ"%x,rjuMb ٚaB7u1IK VnYH@"XXudаh!Ѡs:K+4з3YP4R![=`5g=PrܣE{7uiIc̠ʩSa9_%` &~ﯮnwgsw$~TpUz|nԽ m3bka%a1=C /- *}vld.qHC1;ZmU&:kx]@Oe^`xUuV24MTL輶b>X,.-e|OyLP$ );n][!q:Cp_} ,\d!:T?5FjΛV(7je^EbP; &dm?n]dEMK8r ؂7OBfȈh2ǢA]1ȋU}L!|KRaGqH Hy0 j `QҪ]ۚ$g8Z.fcvl(  z=jA ꫴhɯ}F2z nhIFaσ_D錰DIVJsNXΪ$12f(- A5qw pDmpYX8Wa1&ǂrNf9jMhG|M/ytՑpiFv@ pf5OMڀJ5祷f:x$qc(2ߪvC6Hw}>EGKp03/ ֛ }e`\>|_^t:onm0IsSs`0u/懷.9hl\IP6M t@(xljhu\֚c";j9τ5@a[7#󌑿^ @ߴH#(s8nX0=|˩4ts ]a7VhoKuaŭ(jUt*eX`*0 d[SA = >hJyHvidP=XA|⟿9?}Ţ)1|rp=,9-ՓQ*aZ 0p\ k- >jTyH]ORuM` A౮Mic1 <4-\-ڼVzPi@Hk&](LÁ8 JVb  tJZ}~sPqac?j|-èq(b I.g5X bd=pe_?-Д nFEI5`=]GCXm(u qLZw Näeca՚-UetiL"j9RЬapn V3V Kk>]~FWW*[ F()`Trknq^rqq}f/|j\wvrb,g=~|1;mvPoC< C"'f|/q7fE/ANOn7i|~>wt=9wvg>n0!=_k5^ӗ/ t _Vx\W>r?˳~'o_nx6Z_pc}{}-9y`n&,ɩSjܖwܧS݋0>A_@d*J04f@LhtI28M=$$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I M=$l@ 4ƴ$CO#M=$MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&oe[J=WsO =&cHDMiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&Qӭn~x;Σo7n8{wR/޿K[+~3%{%LFK!Ѥ:Y*]MA6CWnL(;]M*]=CrѳR,akmE.{bon lEb-Di5RΌ-ɒ<5ıg(;\%0j?q~^OZ#jg{;]8GŽ&5\-K۵*gis;E[FV5(]zq@7__]~ӛUR9Ω?33z_ptAiK`O^oX{6ʊR{ɽ}.xBs,g UHs=7aBo_-ڇ ,xg*hW*~*II ^ \IpyAXUUVc+IBW/UXUWw&$]HҔ!ѥAO\g0IԛWIJu?\!*' Wy{^*L~R6/pOpup3Ƿ3J|qN6n/٣be1i̷-GŴ[MpՅE%,Nƃ1<{ kP^4NE|,F0QV LqXt{Tͽ N[50YEH(򙖜eN)bxЄNnvGts<Y,f!Yn5tz/+zT/KvOfimw1θo<Ɋз/.Щo/RUjžyq*oB)nY/.6ʋʜ98oaǿ&С __UJQ#_*ݻs}.R247%)3rYd [hl?T7K}'dWDRKJgrVG3 h(zӛ,/V.܏G|Zo\aMbwfyg՛ef"aF܍1T)b+t@HmOjRz\Tkn>0/R%rH?/}P40`lRԋ|6w7ah6^>%>݄7tk#V? 
* R2=O &,0\GV RGJqon}5@ ݱ̧ ƶEL&LvY,#ھNѤ>Gm~"M(:!c42ݍ5*aFy8` ~8g|U8N+Xp8TMo,T)/ OxsĘ嚺2-#Q4 l8t~VB4_`)ۯ|`} `h2o:8伟ۑVeZ8_݀y[u\ӯnFT%GarkG\ E@7-I<]R~,٠.6Kqha027v}_XWCYBXg2AJV2Idޡi󵥦LǶ#̕P޿+vfQJv_sA0=E}UyJ^n&貫7+Ѝ}}Уs͊XWī{JS㯨`y[B͎Ir˛#ů3 w&VE3Q/]QjqXY ֶoDk osDN<߶#t'狊~E:C V16q9ʏ"vDg+[כ)&LR[MF\UFs4N#Yw SؑnꝢPw<[o{Sw)S0.\۬=r2:`ft{E΅&6_7O*ew{w)JI#Q"Sģ#B Kb^xcШÝG;Bt%:mpw?v)B+Rզqڂ \ ?NŮ r?4@9X%5QgR;%5iGǞ֤&Ʈ)ؾҮ*y|r0im;ʧ3l@9pjRcR;ẖ#D9V܂نiF"L)dAfljfEփ|k‘6ql>~|AQe59Dy{)njR |Ƕf! fÀ}}k"J_F=Jɻ/LrX lPqU(Ys?C )#@סvP/ ;܀ۏ0XV!(F <~s6__Pcp;f%W?~P>Śqe0!!Lj"" 8PRmtvDle'vO D4ƾqD"@41%8tAA`U^tce =8ҮBDHr~X^> ڟ?'I4,S-QH eֶX mMSG`.d wF6fAZdqS3wt/ dzO$^4grC[iBNo8. >C3;4ޯI 4~zKtSYhNtA> /E4\;\=ĂڗKK+Rw*yϤ$"{P4fh=@Deij AL)I>=9E 9v)fD3Ju״3SG^ 3EaHT::WTbe^< z`fG-M|l|O:",XXkG{ׄ4c &/qrA[6x3fnPX\y( Wڦ\bc~  ͏Yt@ǧ@^aY~Φ}կ`a'Y>skxnmϭM',&ȧLGG+Ex:4h#dDQ3D!x|Ȅqs|/I5״ ϧC ~6XMtY֧bu4nymfXSY ߘr.L=iTh߽{2z?4aq\[p^S5J̥b]䷧HlDVGeZj /ք:0OYǚiz-И:f*Vv +8-3sud?잭S:L67xٻR{[ iV{&C(#( (ă/3,eFQ&8ʀ/>X",%3Vo$L 6YDqJkXC0)a1b-]bsl0L宔%6Ą$ 0&2` I``$!D3J91Y J(gdc@$滞{ ;G{eU^ZNS†ذ=U.ȸ][kl؝jAv 3+^ Zϔ/2Hb_!7/oLF`he "|D ">}݂}r¼`/{ُNN?Ξ-p[wh[J`f .'JQA1b|XЦJj듒k?Y*<0s>S4lϕrs]Xspp0s;$^OtFKp60tCt ґ~dH:A0LX-hW9;C6*SCw=؝}4z8,6썂Eo&ZH/Gvx1_y8q?KO[|i{ПÛ 5&og hΤqD_V|ot&uG]x-.z8`Mxȃ)F kwl8꣡gP5 66hA|w];T?tji8`ݱ/8'^Sbg&{L7+)6vfkm1Z\~8ֈJ$jOܘ)OXN#̕ F@"̖ F_̯&T42.Krϐ9]Bh2o\e>Tg3kVbl/LiEyZMM`eҠ?M&RqBVnu|}T\e.=շNFޚ{ҁ]٥Sd?t;G3&@ݤxX.C KIϰ81M t#YMxۣe]MMRtq~ht "AD"Bxh(Hf`aEH0("13>/)?v&t>/bDM쒦;T<$3 I . 7T5D(Lx(h DMHiwǚriǜ[TIe55UսC%܏#If4F\s)Jy4}3)c6e;S " Z*d4A B XQUB¡ enO$hP&yFJsBk^VEqpPYS_}IeD f&%|I$ bFSP'`<}JL摐A4mQI n$gƧ/TD- f` Pi ( ~HPIB!9 k*H ^08*!1 šjDKyJ0E 87!=QZr6/xἂx}"MtE#pSr2? 
D5D-)g`8BƮ0wju"rJ;˒%kza1e j+v'DŚPwvH(oś=̰̑RmŜdj~):_'GŅڨҦN^i&CxfƑ1iz )j :W)J\L[(wYN"FE,]fx} Ƴ't^cޅ5nqzK8G%w^٫ߚ>-w]==& \񦫸>Kn}iT C<^ѩ[ 5TPNP );NsuwTB-:VYdܞ>S=܄KFB vh>( pIW .8D`JEjI&DҜv8ld"`aScb71e+"[j+i/$fz0i|ҜE5&Lr4g)]B$ϡBIOI7u/d|͖\,y??W;1Mߪ>(]cyBMzWG`IZ;xݍ%;I4W.O?ߩPsE=\1>NQt0PګpJzViXXyn$EtZ)%{ChQ-%H"ޏ!^&]vͥΦǿtѬ-PoYV'Ґzyiՠz%֝g5'$g AhdiAS.8#VGiv$\uH"P Ĵ>lO"#4` ĵ8 ^ {kg B~k!hG DVk&bRD k dGt߁ Ȗue]HȁhLI6y÷ݨ"H +RF4xΌ tf:"xvTNA GtJhƌ"u[#ݺ"UGnC{f(QFIQ,ќ(k˃! 29?A8-'0?}S}g5Cp"a &`GA3 J@2; "ǴjJ+-7F_JicJh㿌O,(/t!!.%2=-Fq:wDVJ)]vѻ`ڭ\օɔǹDf9SF6fAՀňj.$EHĞUwz)sItQK!J`B &X-$FN"҄22#JQb_ 3Lp9O7b0 #dd0"'z! u,>9ނZ(69ڶt6jP!YطCP~=!BB\Dd[-}RdNxv,ڻv+E4[u !.%2k7Kqnk7')AJE`Jnfk$EH؞ڭPkjA5 ^r`N!2xv?KN:*z]v|^GyCcX}7t<Ү>C缡 9cFT>|ʑ}&pFy$ NBo **z{Tn;C pGxTELz.MZrϭȟ.s}ڇ=VO.O>/tqA8krZ}')twiv?ŷyG,J/T6{9|^6Mg~cXU7P)KZY&{y%5ۼKFaKj;`I!ݳD׻ŒQfuGKqs;W)DJ'ÖNv gc2"W^]zNv@եuɒN{Q^Ty [$m;aDd#BdB.qGlajtthW$b thWd=6]%r)K5yi |vRlc9aD+Q 5ŷ:W(g;rǔ RkT$:blF06]";ع|kZw?ʺe4ֵE&PޥvEWu*yY5.Z w}h"͸x:g+~ZFyO?|i# x5D|QMw8RrNj]jU=ۧT=>929nn50x)7 Q#iQ:CBb8v ,Q;z= K{gmrc,"Ea(|ɿhez0Hҽ]sy JJNz2b3ůNHoG'J3)s%{q"[uh"]M XD'n:؉"`$fSU?D-I5_{ @R1_䍢RPQl#轳~Ԏ'Z _@L)S9\CW<7\FV c}K tC䫎'>9N;oЬR)Uū3ǹ^(9xV$U'):UKvy-V)CPF:Oj4xQ+TDٿ6+AfUò{҅buaٙ/^|C:H4WHIZY٣3W)*qݖTK0o43QϽ@39W,0L4YORFk &*ԫm‡ ~^5ohYU15,B*ӧDíva{3:ٍ9n=i]%1NGFQo"Қ4ɘc}43mf7jE_/IY{l7SHDSʉpk72nz!HHfJgQHe{$2C$J}RÅ88mfy(5B0b C) X3=M%h  @VX[D ȈӀ29q tBPIPrBaۿv%cmZӧXAD\ jobP6Eu-SKk⣻X~jOw68aL5J Ϲ4틜ErEgz1$),=fs=6pe>1pSKuHRCLx\tc 0O-t`H_P.7$:U]>L+icq`E,?˔B)I`IJ`I=u8BAaz1f<`"}*it@5G‘ .0cUS@W\X\܊gQqYdDza aF?A("՘u 8H5GP<5Flb- ybC%'(0"( OâĨ zM֯>/E," ڽE\.qBDQl>`G`=X1)(qTWI0'~\xIQIt⤤hb(-1i[HmE Xx7LG5?ڇfFzL,`S.p4/@y ;$hJĈrth-`TB`ipQAK&@d|VFЎx-Ȩ qKF I8?{WƑ _R݂>Ӫ'wuXeռRAT{z .J%b鞞]'Lն 3^QY[|g7Wu\,?pńy1O[;pbNL37 S@8Umb w778Tfn8 PMHh/6V?4ۚ1 EE'{5zMhwC9y(yH|A"CQ!hxj BH5_nBD!kA =`tL B+jx*(ކ3GFI$KÈRL{W?Y8ڠ( oᄦD+ʸ@Ջ$Ag06JM88.`bT*snA_}`LlļT@yEFWJ-iReU{_5w HPDhH2a@-Nhk#R%s+&)t*o¯:`PJ0E#GD^of1]P2ÂCy &0D(:EPO ĐPv3bi2q ǧ#לih PaXE -Qmݘh.A0 X,-7wLo.&ZpP2E/ K[/ɁҜ#o'#w7?}۬ޱ9`p-F\5+BSS#{tjm)ɗ\*> jl 
*B_ZdjlP818-l+hR+lV5JlTS4ޓo> X߆[oַ8ƳM9h؝18x) ~Y Fԣ˭XD1q+Ѧ@ hBŁa>]%嚏1 {-t @7Iiv6U hÑ8=f0bAfcI`*$:. H\fWEzqS WMn,*{;W޳nvR2Nwۻ?䗋OKrs\Ov{E%};rU(p}tz}\d/[d/шZȞM9N{P:I ] @E&"UvsYڑWwsV2cfW,dr*Y+|+XP7/=u]۫X,l^~}Y:MҪ~1Q+0ܜ I_hReĖWf4R|n5wX}0+%MV[$|`ax2ymoMx3t<鹼+[ƛmoS^( =wy*{.q˚홛Q={zpCMY)>zyZD?oZnF&G4%1 ={"Ձߢ 6:cO~<?5;;=JDu; >p. 2lhwO H5Xsޏl):{ j}~pQT\}dYD D O?‘rVi?9;' PB#л@1rbf鼑Y(<8{]D$Ԝ=?^GxGHuc%!lYM%P#ur&G-1FneL,*F8I<^Ƕxr>xg8yZGLT&qmCR & *7:ʙ޷Փ|ZAÁ! K8ZҢS~woD~K@tLDnB3ep*Y}H5D)Yyт%9P*J쁕Mt&Xzl6+pA ˃Z,t <򠶲0>i4n/,)vKr UX=_2HDO{p&qښIW,}/ tJMrgmJz{Ĉ (H"0#ƶ̆XLR |G/@Lmdc8m6-[ÍgEfLCR3$]6+ 8jڽ+Ųc=Y4\!阄vtǡg[x'z]* ygwQ@igRČ"TK#$i`qm<mh;*tّyiYߛg,cbv#Zg˭<爎!T; R2M{fTsJQ+ H`\t Y Ɣ88aըN"u&@fNڄ'kIVÕE3x'PMD} MH¥M=H4T!U?ftv>'ĐcTBmf7C7Q||PL:l ǫi~xՐWjK\#˖{`S*``?~ ֽFـfJ$t ՝a9PwE%vGS!+][&Дٌce jD$ 6XpgX b2 J;McLj=oN'>DಀO|ǁ!+o%ʂyg2SW@F*3?[:^?Vo0?嶫}@;&"Ջ嗪gãJn_hat`6O>+ a͛,+$.wlg: w݊F\\^<&,r_;u7v8o=~8l)uaDRB6ciJ鈧+n7oqM &I'+!q[}ZxJVa| Jo$xo^ Qg D# 1|᳌ოr^zp՟p)Y՗x@{yaI6%.I! n.)PZӯ|iX:07D5~@5ǘ,I@ 9Zi{@1$Mh!yMpx,Xa<8ˬՑ"<^`Z(U.Jo]ЮkLVtuVrkjSns!T O٢70֌ ܀ߢ N/wIFl4v::[Ɉm!V_ͥ}A Sʀ%?9{un履9% %6^!Č oMׇ/#tvEp9UJf+ wnKWSI B{?(ݫFwKh'3Fӏ?}m-asb'Vm %t.-ojSGjuz_U#I*1=0o5}[ݓ!T k &Y=nH:s׍95ltRnjصY2" t~Gsh-> %,>kG+9Q<HzE$$m"SZ>qX?HSh2Qn`J0SxOЃdT呜in|qu JQTZkH3()-A)x L2|uxS=f.$1 yE%R'yf`7dNz)<{4_;;:3:P q[9(?{Dn'$B$JbpPb~Rx3LL(u@'9<@ !P#6>ABDΦL ;4a>*cNr[BS?6ػFn,W?%@v;`z2/EEJ+N $˖lb H,\$s%9`^W0Ebb6TZ*VZ#iByh ZֱJ8<3ϵb~ۺ&&)Y! !oHUOI\ʯoÈ#p*Yr< $Qd"N" 'j/NLV.q,*iGװOՃ HW\$ :P'x 1ߢE3^]]yi%Fm7 B 'qTz[2%:Cm"O[ERWسkIUZbCd27*Tm\tPwI+vlcjX-!H1*.^؂DjI +z% c3E-(h2~iަr&GusI$}>:v4Ϫ WkgMVͦUeNC`h BLqw%.aejV48C cq}tU5XGf 䝁gb6 W`y=tُOE43ϻz~ǃ៫qcu"N{ 'hlL=bC VKH4?л ܏D[Er9snDG4ʺ*Pl%b h!AMͼ"2o4+40WRNb=?IN!ngw۫#Yaւ!JI+5yAj24 IzwDZ*F<S$`jTIխݙ{ItUIԏƢ@h ˇ, ! 
C[4‘Fk_ev7c Xd0Cˊy'GccfUpvACi|YQ ;&]}.[WWf xw/6<-St<ڻx`QLHFq'ip4k1ۇfC j;0<&dq`")v̘1Tneɹ},!^!ji`p=Dmv4ϓmHmNC0)Hy=hw'!C##ӣ:=@ ٰCSz V}@%`0\!;KcsS(dpS?݂ @dAIqڰXu+oCu >E;l}v E)BLN::HG2uOz=<{T"Rj5E =8tāi1[;Oj㌵ t4eI%.LRf%cm2-K%.&IFiU"g/lbqʂۮUZY̛k WaSg ֛7lyY:No>-﨤.7MZo^1t޼{ʭ<[o~ܼyŚ3X]HMMY R$@1yAx!🔗X{27}|~;7X+y~\b0:WX!/V|?b\b8?WX!!/&DwZ߬>l7wI~@1GOr)+jԖ#pU|ˍ Kx'0ׅU d☞KJ̉ !8(c҃1. + Z'JkH܎0u>ڼ5B-?K5GܕR`^pD$!;½H%; q_Kؔl~O =5l>T}Lr;laK`p{o&f xnߑ|]I-w'of6eU6!Me9!h, Y׵m큶|,t_5h&O[{!َ7g-9Oą+ͷ~xwfi s=@z?'o\<+°m Gg-┴e =`X7}[|A 9]BuDTXn~ޕl qGM؎@/j'f~&H\F]2¯ջ T&Vœo& HM CzM:ܥDb`3oDpP?ߔLvpD?[@ux8#(c& e¼]WBwu_p*1]%e)xx,8rp;;tYk&%`[ѩhނ5#&IT liCkaZ|ն!z'Op9"}mn.b-2k?k q[UX%k 5 \ͻdzƨMv5()OTg xY;ۙZ58k[XV5XWi+ MuU:V,D(?_+sm侨13mrWlL")nRzSm c~y}Zm ODž~)W[!g΢Ay_ɚOb0q0+cT!ovS-9T-wf-y9=$J:ì9:@G]+-?RHqΗ1%A2[VG(Vʨ [$SDbJlGj9g+lE&U=1%Ak$FWH"MUɹ %xJՙ^>0 企D]&(ǘs9$HFc`:Q1sLI Ř"`c9S(Rht1fIB9ƜcQ%S2c1樒>(]Y2i1sLI`DZRc9U_Yrr9ǘJR/9A9ƜcQ%353(ǘs9$h:s1r9ǘcJ #蹶?ks9ǘJfz|1fIzTcgcŘds1Ǖ;JcV 9|1fE$_YY:ǘ"Ƭ=f9Ɯc1%A؟5qǜcgcTŘaXc9Uң1+51c( Ƙs1Gس.Ƭ0E8ǘs$І竏r=]ΧK@լ>Vstد./=k7w^\5kSP$ OZǬMu\ B2Ƥ` 6D ϜbUn#YC+*<~KEt<|d%QBKP_# *kh eJQ  RQEkTS#+YIDJq1"P, D)ʀBX6Zax`t!;jq+-+xO0=u B^"J*%#g)Cx$UbcV8kHL0ax!XJIYH̕ރ`Z'`PSiҭ w4\:=yCH⊣ g, fEV{ L I3 va/KŌ8&ul|ŕB.}yݕXTH0T1S%)lpFV m @Kw'xf:. 
5pM>Ɂ')3$yCO}hwZ"v4Yq"Q M|efyw'f6vB_12 MjA~A.^]nxK?nK=Dz?WG~|d:[Wp0:U+O)/gh-#0/a>iƖ41X=|ssOTIP$g;3s/y3jTC~$.x OT#Y6## F 4iPYEg  ШuQ_Z"zTt3/R@3S3}p{ ?ucoS[o7!p>+b'fz?`^ n8A$oˋ/>;&{=Mip1{.ͦ_|pCw(Xf6ie!$`4 %[npzR\fWW0uƅw)74з)]|:v>;;̀׳3Ld/LIήήS̰3s;0(?D4ū Aٱp^d]r/FIm_=]\MyMX P A{(zc$\{K"Y,M2[a OL ֬fx)3X]RqrD`A'H[dQ0FFG`!M$ƱG-Ckwvws;dt.0 I"Ynu.WLZ9+3҇'2 G1UP0]aRʒs[g啸AԜ7C# GFiBk hRa˱ZxjP)995bj%N_ dJgr=Ɲ鞏T|hAuHҘ4f1#su'k$ w@|TXq\<% -ZJ}S*Bs!)'0čĞ`tT!/_Z*_p?v3\W װ;cM1*~k wPmLjlJV$ {P[8Ŀ*h?hgk֍T6UD-}>U6U@5IV|,$uoj JPY.DdhG>ʴᑤ)ђ(`Qz*F-N1!aM#L : AbI AKBd֎&ͻ9N, /.|A!]av;Ln$kQW{2HLAMzc))^&j0M>Y(8auaE$-܀AQ`5b;efuhg%$4 d0~)y;~g$:Ό?RT:;߇EtpN3ߙPy[$ܩ H ئB_6 S=O esQY:Y?E8Yݛn?CmI0>7X@#JGU,q,FϲoS r@to'YavʛZ B[dw.ۇ1GQֈzlFh͆?Z7KZIBqmtPso*#ӺR #i~NԯH^^wnU&cT9TtOjW-!tWRxPXx^EpyiYpXr/$bP G{PBJxljG-5I}љv "}v^gY`YI5\VE^ބ=* C;_싙NW"!y P, SYvSJٙR-lhrIɔ&H(^'\mS tQFEQYb*[:T_;S>V8?1],A6)=|8qUͻkWcؚHFaՋoejʣ>|u? ~Hx_fXyr '-SAg2IPO+ gb4Eh{VO76df'Ʈ3ccz׻w?Kɲ8;N_e}JprvH.JœվKIQ>zQl8kO OlL4ӌ8ån3;͔G_$%蓍U);=I8*G uZg'9CR;}lF !,'QUIeVp y:QFDƍfX/ lEB!Tkb>HK3NRBQE߻ i&ش`JYhԖG7XU@XV75UV䤂v!ISN1/z3 Lȋ"hjymmL*nqJ{PS0Ne~c^ O&oo ,^?7Y/Ա&1891vXAVHb'\ !V6dRᐹݭE3Y'{[4uҿݹ'9.QO&4Kr#O.ً>Qi>B/kTiK'BEПsA!H3)ԮثhOG&f; ^wp2)eR˲2,&qE& uo룖RuD{iQNHᑵ1aqB:$j^zzYhAGgbچӤ[n]_kґ>&eTH]_MXќ[]7!UV/)ɜ&sϰrQ-0A,+3@L#6(#|l c`d&@dY,HG|j6[}@@aZRS2XzHL,(V3ʵ <\F#Jֲ 8V퀖M+Znc}q d 1p#fQF+ ^rL)6*$1pr5IݝR4W\MJtiP~}Y-b\ R87&PxgRR̂[SeiXJ-zzz#8~ #@s bTie+ + )  l%*N(Ҍ3pc>{%ژ;ҚÅ#)8:fCsQON].'g \N,.8ۚԇ-8ٝGT4d__ WӍZSҊ~Q_֨҂:4N.2\&eLpYVriEx#FI4XBaꌒXpe"ÈQ1a4HbDviD*U頙)cA2ȱ9P*ͩ&D~\]* 2߳%;Gf5VZ%Ou9> "DwuA}yY7?v i]Ip!Kz![ga4t|RJUǡvi}U}Q1$jGځI6{v`J?QYdH/&t׬jHc>~L!sVuM"ӵ!P/+Xea◬*LE`LT!0#5ӂc7 L)$t^O/~qpkrƼPSuRg H\"T". 
wZ ;L6@/f+Nmׅd"M%lP 33!PDwdP:<,(^ `>&JhGJ:|]2;RgZw^^h!zdu*ؠ.IvirHk1M[ߜUk V@-3f7/Y7$ɅDGƤo{ j4[= KIK>]fTwh*Q Vk+p]qJ@~[ -s*oIg>!`_v;צqj>wf$*sٗT)iDN9gA)=7)-`A\`jCs2TP'.Z*ARÈ }1 ƅ-so}"1:1J83$ m("i&FSTpQpE(U@/Fjc&l,>J`lvNzZy[ 6UOVPl- p}G :64!N{8j1~Q˺u&Y} E+pTbF>2Bǯ&U+{t%+dVE빌FE3 #6u>'ӳ7av =S\8{CNbcq6Sp3~^01L'+{km$%藃rS7Jfg`gvO@O;k;3sSu\{<@;#H(qMǏG-7mu2>imҥy@~pTЁ-\Z)jz[C:} wH8?ni,)Y9]fTC! haN$%u'4Or̀.]_\f cPϡ&; Q ~f&o Y9K3m,!y[g'ђtrHǷ߽zwx3NJ_kC}{z );\ɶP,GDpA"ciB2R3Q"7&=?}{T,,HϢW8t #kI`gÛ]*tskBA5wU𭳄>m2gPs/YHN6VRN;a/I6;`jFNɾ~N fo GuH>+=cF:Fjaȳq6 S >ѩ9>4$() ϧկoʋrٍ>5Z8`^qFJ^rV|ъ IXݷD‰iiN3LIf::̜s9r'<,o`W\ߵs=+9+Kˍjay`K-qOHE[jXhEK2BaO q v0tJ@飽35jIj+d =Wo VH ",ceC6yVVg 1+p;ߥ!Fm5k9 Nنq Vi<[ɑDBnvBu:Euڦ `I{#iTήy'DPXeu!sM`(As\6yZ癁:<#芔l ]rFF*J;c,l"੒XuO2`-;31$Y$j6jhPwu:"ZF *l-M *@# sRQ&:&,ϛ\SXYf1|$ό͡1FkZ V\V~\mriG-$\6Ph@# \hхvQSfV+^c&K׏/WFi]3ݎj`e*e]zr<NY>\6Z6ڷ큭& 7VޓM緓2\?X;a9z/x TH u~pQe[q\ WɿkO՘1JOw$3Kղctu;0AgAӹ?R W.+u>.jֺDK7h[Y]!і1 :nQΠxTƇH3y[&֢0 |akYO։cּu\YC>=C0Ѧ (' |A-;2$Bs XbdCxb<ЀR[aIhXL1xE[qV_vuA.%XLqHWP.dZګ6<\ܹ8U:WzSVhl*JHUIB58*s'GYO给úӇo$4CxHLJQW'0X4X-k6g 1vg\?wJ M V`5A+erP`N^r($[GytBJQGB2E+^" տ5?lB}ټ`>=^GauPk;p |xzsz,x5?י>ɉBYLϿZ2˟cy>H̪Eҵ~Jr"qn'Ho_=|3}*ԿNg׆c^km 4<|J|OUo8F2g9Dhfp}19gD Am[H% k錥eo/L9~X*> R 4K;!`54KJrXBj"F+R{2RXvϘFZ\ d @F0NqL?T(P]4,2fQJ-1hhBu,Ffe殬 %Y+oiFX)dGǺy }Ñ׋ F,ƇT,4d=VC*epEOh*h+;{<{ƦcѴB6A[ݻw5v>Őcހ<,= g12,t{<'֛є{,;hʻH9䝞@TB5Waa$-ZvMEhySJ8 !mA2W򔒴^+-\*7`% =ĴG1w&D9H,DR6d\˓$RvTcPt9P x#j )Xjk2^rw8>flxgLǿ`%~TSh¬Tk6㰠?ݯ29kR=ݢmDA *wWe#2[>u)x2,hAzJfn˖ר^2,mMbPљVq1ӲljW!ƠZhd8(a*/[5 k^nWgsE\ڬH2Y=4yd0K,nZWOy1_ r@Ye;gi7)zfNl6_)3WN h| ܍?njڳԗby/{U?[/J͆q' $=3$Ol|J8ey0Hh@S_^ .&'Ot_}I&<%|QZv:Ezx̞&VX8GenFgYOײpC5&zi8V(:yJDQ f(L/tc&9& Ncڥ%V ˑ)DcB6v0a=,| kLJ:M4VvvclL8n' ɮ|B -tT5oARߚ2A ^NO㠱$f<=}7%B@y7h\J2z2^c:1'{L_Lǰ0r vT] }cIC,3 WǢ"{{,:d @M@Hԋ(#m-Ow/b;}3İ@_ybڗwYoj-IK&{^/i2U"K.%D qObjɸ]36oǮ$ݶ95oM4n [a7;GBnPS:IAfD>d,7\tw6[ŠWg֊w6ΖyB#PG5 7MQ'O- k?Ju'E5^(Ųi@A6I*6}U&Grf!|ҕ4ץ.u|Tӕ6F9N,$ދ$syt)y@LM=ZLʶJW|K:dMi% hQ{Fd(U[׵n:gVVm HLf$9"sm"Q&AP4ɢ>dmLކFIkAHhzLbv˵ްz aV H;r[19GCn,$Jkz t[X#w+[hD 
^~5ouRoZkcj֦}|~? tWl{?K:Λbɶ#8 KȞs4Y͖6+EpU.*g|(on_dc 6KȖ>[~@ |aMCyY+dJ)l{yCUɩԙ@No5.B8o6s\ zh3ljP t3Q+xu ?-A!sSQo@d4dDH^G6 ~:ighb,r'lb)@P҂/Cg必h:s?/oMn@Lb%7Rm2-Ps5EbtZ4䤆kAW˓cG: 7P6pIm+O0v_9hMՒs lMB jgAv:qyn%cz,4UL,x?)K^"e^̋\yj^Zj&T.e(ddcSJ.ien]?{6oE~޼@b n:vKO_RrNY$'.zqdq<3Cr8*JRI  VqHSͪ6T)լj'ItاgPÄEZ#AeJc&eL`,‚+B@RP  2`(a:6G$b$n3Gu,m&6XFi P\I"dQQBcBI*9R45m04I*`I!D4YYf"0BR6'F< _V:e:[&+;ʎί'qr*@@@uCظ3en4Bd bI?-]/!'g%1AsS6 2Q  7'!ÿ$@X*~0׃:}c!#P@3 e!`ӖU-lMVG)˖5b2k_cA=Wkz FNge(z/X˦R|< @b2FhOxghƻ60 TaBg)QXa@ 1&HR #UT~X&X"C˜[M6˪mG,9`bMآLHB*E$2M90 AB$LC!5! "L#Bl Cl!˘2ƆW3䬶B SD0Bc^G aD)J8RV8DQzu p׹o"f(1H0RT%IDUVNJČđXec:P@))GcC Yb*)8e8B#F4fiHxb$ q,Q8AZIDm˦\q'Ԁ"""%@q0±)(POg T7b^K%[FSBCL#lO3 }m+dKHH@F-@H{C6 pK.q!\n CxB!KȠNS*DO/(/l8Sr]jqT;$sev+ Ԙ$8֋|H7 ԰v p&ڙA)_Z̥ 2Y2+6KG0`zP˱ݩwzZSBwncjuuW,$6ͪHz4h)}UQ uPIEJ(I vARVVz' -hT:ГA ;5P?b2NO@.;Gܼ׿%O`-Xϣ"U+/eBQL߽4w/7頋O]8m؝ __S2ӯ0.ֻq1_{`~bd{AW>C+D\]f܊\'Cb:F桭AoYj*pS{P bىIbl$m I?j}dw8}iRSqMwwh\˒^%N*u)^clyielAgGPR_-KN0}Ifk9(*LMSk Tc-߷[5( !} K=ŌǑqܯ(S5Gl^ZX%Ǩ ]ҩ݊M"1uLa}JX|+ ?5Zqފ=m*5ڿ}s<+`=kp&yQͿ`ݯ:` r4 M;UVgonjr2f2$N2;mnH[.5 Z5uF^Bޘ~:eK.7xt^ʺH^᪮Nt>a@>)xBnj]t-׌^y`hጿ dNҺsQs]8☟ntR~QVmd |CbC]!K +a xۥUu# c]D {cLrh_&L Yu]P!uR.x&c=FWpIJKҲ?rX"@HBW_**Vń'a0IPs%(Di*LD8NpSڂ zEA"ʔ4VLi< %\(CHqX<$ RHvR1gy& 5_*%+0R$yfhg=Eq~Aͅ{v@%'·iϿ_QHP2&>M @1 n.k i67|Y^0S`ܡ-'gLZtl]٘ǛE+ ?\&8![-0MO0~aVt!ջ(fnim(o򲲝 orC%|,k}^R#Te3a/jQoWRxig;kl0\az Ջ*}d{nBD>7idRW!|T2~h)0YoIr% xmXb VH[:z[Zi%Вu&b)M{Э:Gݺ5Z ~5;mDl2r g@k+f=Km Ԁ o"àœe_5&jzcb;w},5,,4c G,1B=Ax)^9p?7u jOV"UqNmTxU͌{\{vZo6bq*m6 ӷp34nJq$bo/[N(\B6 עMqT36j08VL r Nzٍ{L(o6oQ?"4}SPN|B"tc:ZB(zZo:WHOWSo74 _ZgH=H6FY}Lt?X>x>.!ʞp|p~ V0œkD0<()'>?4Aħa=YI\IBN3is[ݔ`,761ŬN'e͎eece1zO cz0b1znƈ0dU.r!nr1rA6rMx̲P3Ǿ4y&\b.67UeY1SSfY]ا>y¬>ړd^r y*zNq 8-(=TFvYE?dF@DaS|5fz(cd-6drŎaxl:_XDm=7D=_CkĽqo7'JwjWՌ@#\擵WSUB(㋓!vdНGfGj:鯼Ar:1X$,s*1.@[mXHcX>VO?l`Ef Ζ@Ify+{pIދ^ޒ_qoyjUVv?z5s}JNF }rgK5늧X|RI } uzZf rْHG9<CZYO| d1TaSŒ{-t{K<%Gx{xTbD /P7u߻iA'sݜOD@!wT5-\Ő/پgW;NXLxbM@ӺD($(bܳ%߅s5,kw][ #sQT•~,]S/%9 /vAL,a(ԭܳĨk[B~g%Y|2f.Y4,uGla1qV: S%n[!u2v "_N%shSEa)Mwu 
F;b(s`ʥI!!FIdI("#( yDM @1 n.k i67|Y7@k-~ڽ?{csd:tOR@ gyj*=A`';֋]-#?Wo2~hV1zS%=Ѝ}=*2 kg~ʮ+ǬWl-plu]1,} wPlJ^qWDNn(i+Սx!DLd&3x t=+'*\U6jPEjP Qrq)5& f\[P27k4h_b˛5\h+WѓuJx rgEp|.\>d\} 7[qJЉ?o,@3ZvםJ%o7FB~o"@2Z 47Ayi[m䂴r=S[ %lGc[ :9zBHƃ?51;ܮ4 ,4ocdmі8[g(=Ax) px oz~_hD@ uFiz !byNU7"h~v[C^ik$lf{\vכob̈V SMP\f{jUis9b[7d[r"ukRE$u=Xzҍ"tC&VimeiMamy{ɮ5(µaۤCO}_j7쓦~A /`-MS i{t+T2*fGkͽoq?4GՃ&x@7fRfBsY[hKn sO=9 ^hI8I{uu ը^qp7'$pGԚ_d yKϱzFw2FS8eGy뷽w=5{٥Cs|щFC~e2.\VgЏ BbE'l(ZRv<"Ul.Y4((QٻFnlW̝d?vg0M#Nf^20X,l%N~*-bSFǖj!wxxv>gg./dr o8^Mla:)K!4:aԐgCl\Q=٭;s$njYX6sluVMlՕGq㏷f1Ps׾ъN ^t vaRnvY)3/ CĿN#Z Nrm[72Χbx1|dVڟdn3L!C:>ii cĽ$l.!t@LZ̦{W,Fڎ(D8!QLebt`Ixӿ 91nx_~eI: tOI~E;9(KF >Bij <=T D̍NvtX(uWpћE"T#;ʚ^gK;(֞N*yY jע)XsIS fihauyֻa\h{gwƎ`AF~ؽy9AIM^$`"*EօV}-ZCmU$aM8q:q4/Ap}Ĭ RHe7˲ nUU->ԥ]˷ }~1C&+Z!죐բU9փCU((këU,U({I0+_O}^e=Z`-hª%i(QD"zSQ\}o|EsW ê3LjËR!z%ww*;=|műo>ϗ<-VT@Wq3߆?|y6땻6onBo6 UhTP(BصL$aaD☧0LI%%zGIy˻!I&SL,X0cA0c(Uc+&`*(MB%Ɗ&)Ҩ=z6hDx$-\Lh`\kݘt4?56p:^\[S "f\*%$#˧X $C%1a k5Nӄ(rL2 eJJ K! 3 ,%>z3g5ˁLvzw~ݣ7M|x\ n`d߾QϹӑPB6=* ?x)FM? /XM <ؾ"ۿi2wyRiU (ސ7o1W<ߜ\'I4FqK4b$L00Xt'/L,KdM@Ăf /<×"2:מexx V!m4z, $$̠Vt#BEbQKDj\Rb0vPRy0SyͷP(>>g^귑uEB j#tk#]$uH]̚32x]yr+;ʋOMr5;9CA5\g\ Pa! Q0 !`񀌓2\G!WԚCe%i=R6g:0Cfy8JLC0D9ob 4XP$H "zgW/c ~2uo}f>3K *=~rFKta WAz-6ڱ}J@(~{}#K;:U/Gmsmi(sC7 JG ;j=bemv_ >;fWٳv?e7]|$fz2IJt Lwݎ7$uwlj.Ógc'R LI*YX#Ɂܞ+FM͹'g%ݛEɊ8俜E)1RuG@ogAt`rJЖٟVEZ×l/Ms&Tq~j z"/"K rKs̉QN<gg{,/dE\ùg5/P/ޝgpYhy|rI":ڎO.a5Yql(`Jl},BI (KE3efF]쇺6iVBaD")RŒ$VR&9lj5F R[ iR3rmU*=ܯ =@dLNTf֑A)q0!$9B8z 9Œ Yӵ$-t8t-E@-o^<} @ew@d)b"H!@}QlkHB Y18S:F YX Q Ur`bH2+2)2sChsLE @…I.`jݑr^e?#OrqCKO)m%H)2UPlEfQh)5OŮ|؝$WҢx ^Z+@΢ *f ]GRpjhѠ`ؙlZc]?Zg0ɝڂ"$"RQr"*I;V>.dՂl)%#{P@5j~,JŰ[\6Sma̠ ^6 5!P$W]ܣz^z6ݝmZH, wPңMߤsHrIn/rf1˃aN4"Q38L$4Ds. 
טK@cK\#RϿLI@CŠL5`X`RUރ^t2Y;>C]EͰgQ$6̀IbF*XH#+W1@%mmUV X!H{si[ #߃5uzakDZ-JgNdwñ!(rTTÙLD^ކ!*?fM9Ʋf7l/_[IN n7)EsIs&`%e>tr.dZI 4 pRL$  rH'HpLeHPˑ~/*!TS*w"c%i(Mԉũ ey 0WD} |,qBk!P//B&=znk{.?7K06ҬI|bI*3+9Ed]mown{s۝z6F4ȗa 9Jj[}4ϟ#UYF0wu{a牿!c I؎HB(N"PDt+!g4&nTȥ`w7󮣦6VxT9G{A PSXڑrOF#;=5ǧl1" ѷ%ECSh;T.BڤYe 4I(?{ܶsz,;:n{=g$AG,$俟IY(R8bg؍"^:ZaibJ, BXVT(c !bp8@U 0;XS]\)̥P HζB7+wECE.h\ pqpX(ЌrBw) JSKQETʮgKvż #4&.͑5"ez׀:߳aa!m0UBh3c,ÀOHbHe L}|.w ES pW81`H$8 [,t:p ʈ36*DjM1hUJ㽨he+E嫰 b\%44sre0P ( 0 IƂ #V7㫔ƼU5{!C5Q>C L2łY媫1ar HcA)<*eob,OI  d3 J\X õ$_KR]WQQ"e!cΙܬ]^Iw8QK~nQjߪQp) ].Ìnkjs"Qڐ(1F0E\qQ,a4L GԼ,c_Լ4ޟJ,)RIJn" +/7?`}?mY0rg>9m+K"" ir-ǀ2 q, s DJ}LAX]fKc0qyRk6zkƹr7c2tRKKVAEaҴ8Dv|hZ[M/ n["LR+$AWn d_ \/MΕ*)]{p[kAke >XЇ6NMQ9`QObiq!STZ/P-C&lP,i4&;ژDVc 4\`~ZULP{3ؑprO !=b,jO(& [*OT6>ix}MQ螬RA PWM~<Qsm ]h@$wrf Avvvkd>!(".abf "5Ӛ<ӆsItHP#lCm|mUD#6Kz@CD>dYD*1Y}TՄKj 7W^cEwd( FPPΜX6.Jt n k\߮yoK4d} tӵp5U&`eM;3|x/Z l%@슓Fʝt^$Eac)? 49J^quչ@hߥh 8@^_-ƙ8.&nw701br뽛AMJ1T#0 ]3G^')ʑju)X"3/'ʱu)(hb~SlW 0`Cgw.)ĥR,;KWytŁno-8Wu{ fP{8D!unz{ |P0 I|0/ ~tk0O68+D[ؖW"A*[r_mtt?`[_|64҉f)0 E ; %Q*ԟ{'.91]?W6B/3{cG؈n׍RDƳ>#M7e6c%N\?Rmtta$TaдaxHh T1}t3or{z}z2xNSL`m/Ϙ9)K~-FPV^:\͹Գ*<&GS*+M\ԓ*E/`=sBHcUG;?|e*B)B)B)B;,2[(XH*B.rՅ1AE(jSLb[+#iElCȔm|mW^Ks:B^s$A*~>lc|eb>l^ T%/; `:hCznUPjYf}f\o̖zo> QLQfejA=E_Bp"  Ѡ i @‚"5hca|}ԗ[`O!&Bz&OBb7eFFuxX:IEY\xB.\aC^Wv@?5g+ ,^ BNB7WLdKwf/s1|4sDo2@{=XJ@֊EoEO.dTR-DDJ{o|(((gC8`FY޾\|mKMPia٘BC*Ww'S4_Njӓ՗` 摔+ iIj:%O5fV$+Hn^T+ɼ QnrP:]T_%e֦ZJ.rmǨ>]Z3Ǘ]$_X"-';}lNpAg碁x?)HQ*\@zȽ)bt(rB6f%6ZQ h=>_i.`-JHΊn$Zǧӗj,F{m9QaŠ4W:%Q>鎎$K͕V)F=4Wt֭|OV|*ZFq.[7vºGuʶQĺms!6mWnUh7 :%
X;L:daÒuaCN>Mtpޚ8 .|u ~ȳs#F,o*>~.ڳb,-"cM%1gC؅;PS7n'?OF`~~pcvIH '@J7%/ZP<]_%JTa0A6఻T"EVvv F[1i~f^.;k\T'QəC Q9gXȢirj[̉X 񗝾'6-Gre^Yv˺ eh>@g鑼ȴqfny7d1 + JTJse=VwUAU,]V@>W:]dT*ad %B_8)Ju^Pi_ ȰtiĐ/+Tx.E<F3*,sI-yZ&y`mnE͑N 6_fTAVzQYTNM Y@L@3ͥW*ӹ0sqwͨ ,?3m3'H :<09(.gPSPs~^ө֣5:ZgѲz+v_V۟BĄS;#ң1efcUP`la!8N)xQDdMlCc0:>.Jshl^iŸmr")NeHs󧋻ZFv[|_]^09`#mB>H 1 3%O0lrI.:Or* G=vK]l̳>NaFa3E}s9xQi7UPPxp v: K}j͗x즍-U_l3^:V)< ^,qzA4}k8l 0 E8[ 2>{p'T[ck*wʻy3bn5ULJe\Y钌lnu-WLDWXWT tUd⴮Tv>ѓ/2JRaD܌ %V6JX+o 4e(Ԍ  ed~$2K,fdp67&QQs) ±̨@2R`).`PX49.STxyFr,2eF}U`I?3m32:4WZ<ڏ wuhS&R96pyn ++ ,*VTyP7zd|.î}d1.Ot]o-h<1|(Ahh5㴷}a@5wA‚ǂ1^ޠ9(a$D'3SĚcΓ1`[;{t_06uj֝\>?Qi_t̺o|մMv4$7WSS.&a(;@>Oښ٥.zQX],"f6L֍OVϮtrW7+!³B9[gVCgw6|s0!WvoKO]v. ILMI÷$U!yC6rZAZ>o x<[~sk;3A\nupO~pɜ\ɗ?gI^͹fףH[+2a %j&~]/mDx-PlڝsfqD.9]I@8 ]\1c@Xi\.AYws&jdL7ؠ"YØ. 3r9^Ge e9(JOLV2wd#qVI>2n-keFkmտgxlwͧs_%_ujL)s0&U]$࿫W DD rݔCd grJea~;'+*tT9x~hRިYjM满/zM]յ'a|S4mDyߦۻ(`[LO%K2^dP-{7f=aj;%d0 |uOE&,1;4=v.r.E'#zH$ '92dfz}p%}5nֵJ49:`-tRQ\:[딭R+">#:%ҿj2#硝NQNlG't*7bg$gĂq[==]ԇg#샖Kp&\zv1gD-3Oc ʼn"u$)6R=iԟp֭)`?hyN#Npt3ur}útCV}FB {X)|ĂU9@nƒETqQ).3ɜ "R-;7Ĝ́<)R9, Nܸf&}nl}nQN044A bch&~KQԱ}nj[ǂqBΞÇ >HGeApRq0뱗.ZF.<AaK寿%w HĔ{T}W61J;wL 1WǍͽ%]н$;ڛ}9[`&EaC4I2DZj)'\ o}ؔ>%몚![[/Ip*`ZR=t}7F+?~p[L$ˈe߬">3R$>N0V6aEҙͽ}j<Ѣ,' i>(YydU_-cr5M/cBċWFDq9# ^v93d/rg\}O #'z9W/sƐYt'2uƐ9GiN" ުtI:.Fo;_pFKyKe$L9Z$a tIH>EJ.|3 >/ٙJ9^B~}O#u<5r/ԥtcTKT,7>c\(.Cq/7ccQ6&,W2#'^1Pznl)14SCf% ˒Y!}6 \']},k>ݟKWo~LէuTS!׾j3Mi=1W[Kc5[b34UGξ7/j{(/W.ŀ03l"Z_}PuEJ}tMf˓h&B[SpZofy yYxЄ],W q='.s$.ful93$p*|2V?voV_? Zo_"fO ~ru]#4M dz,&kxrx!wpݾ?)m(3bg0Ȱ\)᧥u9p/ ~U]6ϫŵ;؎{- 9JKdVv%|dRm&|/f h!v&*^7ﯮ>$"ˢ.'L/MrfXj$1F0p}}3{5\UI(N$+9Ѿ S#R};J<杏nsbgq]WHL|c4'k[,'=EІӠO>&߶¸|c~Z}ቯ5zC8Εӻ.ְy|oi tmW73bيReE%ފ6` @BK[mTQUM)RZ@U!~ǛfSj;7޾KJgZ:?\x p訦X7AM<7/_AJ^~£م":Drf1n=%]6 (yi*cR*aL}FEF9a6Z|=e‘3U'(!x#ΕQ~MPf"5`D}O?gl(`v^²F䴙 Dwp89~`̉³FPɧ:F6!uec'bsv9^! 
} 'ܤ% }:gQt`bYɀK'̚>!Ժaq~AGx3#,D"S-#eb]n,fZz1[E'Ĵ"kۙJnp݁a$~٭w{831\4ΌYɥh TP1jc^qD^RʱɆ^9|,ncjw;7n2p^qNB@չ&T1[Cn0IkAir\IA{2~&9'Aff`l[YRte-$%Y R$QZlV}UbX4>ՃSLA;P0tuHJP똦 ||I)\śU$КUP2Y #JBbV([cӳL2@}uvɵo{,]';WthOgh'p*cUY$&Dؒ)ıyM{sI$6تèFn=7AI=eX6Q)@9\P ij"a*øZugG~{J{{")($i,מH å89l{Z-QcO9 {:LKpP5bOYcOoO#0ekOK5aXkEE1*w4fU+45*]wTrJ5ߥ2st(!Of1f WA,%Desxڦ^l˰op ; +#g52LhD82RQ+RO3E2qa!W>SvO"V(ܸUx6 i`?_N&Xy!ɰ: }f`tk*?zl wUmL!T$oֻg܌+wN+8dͬ߄Bb? ۭL^mo#F›4 Ն!a dߜH|9a\Qhv룙m.y{O"/&ɚf+O C\R^ k81N4D[̢Z9#p Kn1X^-5Tuf1R`< YɵR$S?0w2Ff[EidQp¢}za ,aRw=3Ҥ  ~H0CV`݊2d\)Vs@XTGM|,Iit?{dݔȫVW]-zPmy."3_Gޘ=TjY(&(VΒ_ȸs̕O s6aBD;O@ؔ)i!:Yf@UbRvzPGj_M(зafr#wSl^kȁ:N(q>f@Ӳ,#BY/{x` VS׭ SpU6ql+B|E) 󷙲򶼭rtB,yߩnc90X[#"3BW) L}{P5[&*D}KMzeL#b{7ތu/AkN#%Pť3H-X*PO;Di" 1,Mq)n0I1mU1/Hi2#F0 $qR ڲZsi[sdzb[uWs{$qwO`}FJc"a^c . KqO%y{)ax)L5ӇTk ~}ůD 27 G'G8s^wcTۘZ%B2ةʰk B}.D`^DۺzYF)␶q5SIar^YžlmELg\06U_`>Uֈ}{[E4.s.b͜Qg fL۬ Kbk;7ՂU_ױIǰ.TͬfDIsH͛xLDLx:]Bf|o%i 湢f<1˄lmv 9VDW@khSHkG6|Bjlr~bGL>ϻl+o zSߛ3v^;7؞~f>~+q7Hf6[nHkLwjqco墈t.Dܿst?DS|9O<_M&A&0|8NFxX?=Ѹ;p;'V2MconZӿ^t5E)@4=^Epr&w ΛL'Ep\l퓋6!JuѾk~ҺٛQqL`g|=L~4*վ;aJf$tSu_' qp<V/W s߀gE'+tFy:~oG_|?ד &姷7ɗW= } t>nwHXo| (\~)~[ҙ(׭O]0ğ~cu8u_,G*ws^ f2 ZGV:oWۊUһx՟]J>)X[_wsMXh>dC~7<[ W ӿ1;tz]8Ͱ߳tCz<#Ok7<7#i՗g./^ xʻ9z٧ף<*4j6x68'F %?s4 ~6nƳ~Us~s.]yk= ?N}a*?ٟb4z6:D"~ #w Ϡs>+>=.ug_s2l`#,r\Nb.4R/7nne۪ջyΙ2;xP;xu(?㜘1~`]o46L/9d:5/ }MxQz.yr:un</WNRh!R?)D^i4OS)4g)7 cTR s2ME8EqZݢ\yu08*`$x&c!ciiD`+e΄2]?% { %ׄxLq3T>Z-e\rdAsű7K"!ВƏ4܎;eY1fYkEhFxхDJ4hE0?pl3SMA()")ͅXp400fl4d0@@FqNL;!U^fK*TxvJj>@7.E\e 7iNc;u;իN~ף}6DOaHq %Ze2$d;uu(}u`B },z<۵S(klc/jq3|Y+X2l>k<~`u𹮮 )*KlQ!!a6]Rڎ5)Ƈ5CʠԪF BUGTX̒)Bؾw6KKb{.<`<? u5+\9g:k"Qn1)*qh,j5cp8y$%bJE"FӨSF29JCA[@<%PY-cc"˸SKBR.R `7t+CVrt%0pe) U]=_*X [KS33Js̻ƀ*6Ba\faTq\2jۮ-^b1@-p N!!>{`9(%},J\ŐpꤢsU8)89U@R)@k(M ;΋~]J.CJerUʋ:>v{{xOR/_Q ~YE v&.f)RX+0H5Ē8΄tƺ:i0Gg^Lv5*[Y/>W޴닺_4C,TqKAc5 =CDJ ±K.Ũ;wc)]E2XurM"rqL|FDF beTjR̦rS!v39έGVU2 d̥Y8(.Y-e0$O)]Ys9+ >mo ) q;Bcc{Ž7(ǒM)GQB nET"C^29נ4eBE [ y4. 
5$AYS0qj 8" F -x9Ai 3`Fe(UY;FC1Xi5/ҭe>cTCz OO[-6jxɐO|x,Hxz=+#Vog'7pq#TKa|\ c tL6M)9PMlNn'W[eF+.{}8hWJ.-U5\QhH8lffp~s۪1ohcOXpv\e iQ{9ʲ!B>]1kԢ8nYh=YY+.M2sy:bPV5U=#Q3jZVGww2ƒry's\%ܲ"uv46&S.@wN|FVVsQ4_g- -F585@1ru85=w© 8Zc:v bN@Ʊw+CSKsG-* e%$6 bC;S`N-ͳa"[.ڴ:'_ZJsW t1v/}X{G.^tİ@GW^w/v+{7F|Nm_Yf\p cyQKI5qJ܌))qphzQ&MYe: qTZ\MYBNPਤ[}hKU)[{}W'(Wpgz>ՠ?#8# mg.vڏnXz,׮A3ҵHvӽKĕ%*8 -9xwO*~085Q+^!oLb:zLJ`!5z7%THWTqt8 몤9~P(WSQ 'R?'XK.hN(B[*ʕ߾=FFbЍs4;"ȮtCMMbH<ЯFHqN9fzXGr٢CNզBtX80"$i[9h&R9%!ks؞ӏI筄 r2:]|>Dyצ8i8IVaqa&O\1VPf8V3,~"HmBhCZ DFDBlעKI3/E/Ts%vk6pVgFNsQ !D/W[( ,R|iRXGiltIRˀ޸ijZYvB"X NF!\$|h E NўAj @-kZBZIu]bp Bg6hB@ t( ]h/R7\3T8@%mm 4^+jq6;(G7W~gϾ3QE"G4!R+`TV3*sB¢7B;ʁx42/A3=3e _49('3+Xḷ2RWhIA==pqH@ 4 AۈtaDEո Ts*3`&i[JqMgֆCm(NJ2]JmDPEzmۖ([9LA[IIAGeK2~IY/)e5˸ PMr~3q( CTPԉ0FsKE:7}J ?*ՙ fSO6qQP\x`( P!"+cA׈j1SaLz)"0KP5a+"BE­í!`n`Fo,V;3~JPT.!XE@OhEˠw e*DfZ;Z&DAxzy z0镞%FF1JX!!B9PQ8 ( >؈P'D4c-$e zNx9 V1J >4b"|҃!)ojsiB2f|WޤTZ k@/ -pvo*Uc5Hthu @^Q;oKåAu7^(!dyWJ‘PK2>Ek5p5A xቹUcr+ƘmԬ$KcPuIdr,A-gʿ+Z2V)uF)w? RJQY/˩@&_q]?'X( 2t5B!ӹ %Q 5pQ)ØA7E0)ơqMa{*\4-tR;I(-k? c C Ǖ!*X"ul0ggªl0N *R3*$ܽFWܭTnXJfVSϔBGPvҔSK.ӎ:I("K2NZjS ԌPK7DZĠdph6.[5P[7%Z=͜4G\dvX̀jj,dk)$QRF{:Ra NJkNqD,Hd^mT>2rPI !Bne%?f}DoM[J:rOjٽvvq=).H:Y/Lh!i$Z?|8?x-HN,~;<`-*q8ߏΧի5}s:Csb9{tg26iә߇Iz mvf 4bLt/j〒,;p9.Chek[Ϊa'~le?<\.Dkge8݊Fsb Zͅ:U冀 Mi Ujex*W6Zs m%MOZu:ܥK6%{fd_. 
?gLaD+/$:kGq*ȉ\]{^SkLv 7ebB௎WgsZCvYgb+OC/ < Z?Z{d4#{z/~u_l78@xZjc(O$956 (;NX<=tK1;{.u)Hׁg:}N):DXց叓̇g*p |x"ՂsBM(Ձ'xx*s u&8XC:,txlZ,I2!%8dR7]!1F"oG_iEQބ;EYj磍'7_&Sٚt@{p0 a~99"Wݛ jL}W^ϯ~`2}n=}1A*g!>ܧ[ym^^֫,{I4π*ޠ+/F34#qE @ʑ!uY̢8u6)Uwqg|M ~r O6 7U^tQBLxZҝ=J-7{7`Zm"G2kBWBjCia䧲\|Rʼr`L@θC[M 0 ^jMnO0Q) )QqЄ=Bcc0/V) b;] 5_A5yfW,qedX2lɣo%8nAP0RuܣlqWR 5,Si=϶h0`gA,#wlSNp9p \vMwAƩƆi3Sm%p#<@@8Iav`'T#H),\:ƤQx.u͓LfrL;FSTD5߷YN˛,3e4R-V_L&&Z(8cV<('%Bm2Z9|K>$/_p"Z|cQn`%ɗeʦ&f!Qʊ?rOv*mUDX*1@;@)a.p[8+Bj[V(F AU \tII"L?uyyX]M+m8.VTWSc~55Ҹ4p2FhxII65NyDUeJ1Zm׋j+B@‚Jq1CR.<~ eާx7:a+MN ^A ۫:I]'<|ԏP;ξM'nh LHC1̿W"S#6h_9 ar=]xb}Ldq GuFFBpP\'6k0TV37I$j7DE (1tdd#mTZ8~c6r7W9y1?oj\\TVeAU4$AVԖneU$D'dpqi[Z$B@^tb?m-jk";_+owĴ?h:-z:wLM6#rdEr9I{˄9&j,' ЕP}itgaNK2]ksں*L^3g.+3}$ a|\bCt!\k $iEmvZ]&, "fW[l~ @Ǐ? W̟?Zkq.sᏵ\;u?RQx'x"Bъ:0Xb i@;m0MEv$`-<"H}- Qw_g Z7QrEaaO9aRؠtqC4EO-lӜW?hs3.G8W8fl}ɶV40i[zz2_*3HKlѷ> RPƙ5 (P͟]j?1UזV][Zumi-i5ib̔dATba.l1fXXL Ès-*vj;fيSqHD ie\׿q~CUݥ+" ñ:!z'#$[,D^=ֵzkX>.f$6i۶@BplJsKY٦T p9@,-%Vblh0whL CTn{8:~Ă.)>p7X_h X-nJ fyL8Dv="o՗d2)᤮2#Pv%6pH'koش1LKᲆS )O#4$ŷ փ ȵ 39?hB]Þ3|ugW#QN7=ɸ(}=X6&q3FzwЊͣ-{4 &A #ѡmY7ׇ/; 54V̔s?֚nfv"6mU;uoHNJ?K)2 WWuglqZg[ͨiťvF7ԮBB9:jY?..9XsQr,ɰNfXdvEr =̴KQP ~4rVCRw֟v+ia8yv2[9 6 ly|.vڇךQKΗQa?GoU2  cgXؿ+4G'{G=UJ7qU9_Ɨv@]hTh~ Z.IRn:PvWiA՝RUO`zZYBpU)4m\NLܯU#Y9mwkUEjHg;!hw|񪅡[ E4*8YeldFN3bWށSoݠ|{vjx0] Y"ì>4֑φ^âRn?^of9Q*k!PpuZln'6An3Iy%T㫽>n5K{-rǦ/.nhZ6ߞ*xl_q[,73unqզ5̇Žaj8V/`pS:Tr$$lY>8DXM/Q۠\3dl';ldNOwov; ).{Y>5DPzޕήu:UAˋYv@7?v+r\?k86FrR.GR=~8q3޺-54O#qF[OsF~cb`oM?v{d$ӃaO!Xj[9Y'3ywuߓ &W%N\S^S4ޕ[gzg0YWW誦HjtMgx_շ7! )jBVhҝ`ppwӞnO4nى.Wngk|R4? 
ݬȱe_*蜘+UݻhRr?u p͋bƶg>UTR} Ioп&(%/0k MTY\ӕ1e#_'=Q9-2Oʀ^[OevjƸX<x [d%Qt(= )o䈚8$4A2ua9k>Wok:Kk.VHq~'6OIϭI$Zd/Q9be#qc*΄JI3]# s3z`2;s=o>'4'R2%<$DB%&ԅ p%c`Dz ,qf9H@@1}%?k/.jj:BЦ@dq\N Iċ<5Wsz]؟2OutiJUPK< ;KK DŽ/ 'H4''*;?khvrcM7 A}e¸R0tD%EC$1 +G2'@٪&J(\RKxbAnAA@%%EJu$ - *?# vU{Lbk[’LpWM ܲG=F3{V%`XCT&&%ޓDM.6!2YlWHaH:`e0(t$tMrx3[ɮ 1$#[R)ǭ &hz&Â?-Ҋ%ɂrǸCpx>i[ɂr߳Qbg߃i8ֻAgG)3ynCL N3\r;!L0Fz h|وL Q3v[7mO.DiL zm[aZqn΍?of}o}5f15֯2Wޟů#l?_Hc_( } H =\|ZGyԇ ҇9>T _z»۝ , 9 + ^+T/ ջ;m + յ)uf[HrAhDBhD>;V>pQ)ֺni!19]NwÙdABu`l=Կέz2 7F:~ܖvez`ɈKccrݱ6w,aH0@hI>, ؾ%}$AwZ|S@/H~t>ʗҊORop5ng66 n@ az#T B&O0A䯈CП,@=Iq_hrףM'PMB ~'_OM''`1!Egc,ޅ~*}I<)0yn=I9@a6ld 5A_P &4D5b*{2m;)lp[MI,ld K)@”k!( ۴me@kkgIzd]3I=AZHʕIo0le l)tuY5 RHZ0<\) b(*)ePJiQOZdFmo删(!Hu[L^b?8Bb2HA 5D!I }F0.F-ɳ;R5vޕ$"mTiw0NcO eSMRnYH*VQm[_~|lˇV)D[gq*pEkOH t$&J rq/z0wGj9.HIMeeƉhdi \HH ӭpލ'Rno[߰lIϕ^͕r%n0lv5\}B.-IEdefeuk˻9!soMbi'g% 7>ryzr^<'-it~IbnxlC]qg穚;z3$w;f<Ϻ/7J E2X;m$O'*a814fy!zO֛qi,..~^Օ&}z-ß=:990UDEjP\$<uW%︾,w?fgY,0A+~{i>L£ZT ฽6o/?p##GĤrm# ঌ}mb;ib7u.Y ,iBR[ϨgO7!D{Tzb#;b('J(!;Їg0A4FI41I2b@B]qvJIO !TiJfIyO3xk/j -b/60@ [/}¢̝ )Rvk~F:pɴ.b{4֗j!t^&4fz4 ;M ʹfWr'"@S(\@zkmW#INUhcrxo?QϴhpUP{sF]̈́".˙Ye 뎐k4/ʋJdk_#:3/lj^/к_7~{mjyS@ϩ/ċr 8gCMc5UiqqUѡ";6VOE|!Aź2hlȫN߁IFY"d\j~hǶt6_}#uP/{%'*=(LG]'/0 r<#(uQ2upv"}x~]?"]jy)H.:tP!J5n5@k-ZtRsTmaw*TP̋ DhrDh Jy=eǘK!y?D-Ϙ */5R S& Dz})D1$L"(, Q{^V^p! o/+jdr4̽ʋTmVX+:QP2]ƨPݩVf")](ڵ[j`u*{_ GoKT +g^N9sfaQemӪU}H% >>PUj.aAV''mssqC8ן>y}STttrryHGӫUZ_3n&cX8˙^9]ݜ~ [cnΎxM;Ŕrё򛂪酿 Η7[PჷdxO#xp6zd( Lm.XE{/|NKߍب COj25RF9&!N a/O^Lxħ[X!fz"4l[90M(,pItΜjIQe x,1ؙP`hd&_T x,ٻlyi>a,z"]L-d4oi%n$3=G #=Y4 $\zɻ6%2O\4_2)n_gDܽ:J,`S<.G.@@ C=vρ_8HĄ F$b"0<gʞK08HD .$s rJn:@pi( 4xaѢJ2DZc0& 0VYqi4E>T} o)ɼL\32ѠG &FsρL]˽* l]"F+fbb5c SfEl %cX9R$1s`Bh$Y?њ":H d HDw @Iq/;K!) 0s 4zY'217f 4)ޥ-:#p,2:?Ha`.ȥ2)F)je|\FK4Q$r s Ɛ+eVbPrݪ0aH7M$b {ă>Uw,;tKYi>Mј܆0)0%QI@(`N5tuCUIm_<@)S oG`#F1~ YR4_<ލJՊFrYyjIZSV 0V*b$z\jn:rnrZrJDA/އlUjAzU/ZfdYk(QQ#ReAfTRq5zkdu4RҞ(S [~m&"I+UherMG^j"ndǏg.G$01M@r0,ؑt. 
Feb 19 00:07:32 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 00:07:32 crc restorecon[4684]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 19 00:07:32 crc restorecon[4684]:
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:32 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c476,c820 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 00:07:33 crc 
restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 
crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 00:07:33 crc restorecon[4684]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 
crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc 
restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:33 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:33 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 00:07:34 crc restorecon[4684]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc 
restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc 
restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 00:07:34 crc restorecon[4684]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 00:07:34 crc restorecon[4684]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 19 00:07:34 crc kubenswrapper[4825]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 00:07:34 crc kubenswrapper[4825]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 00:07:34 crc kubenswrapper[4825]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 00:07:34 crc kubenswrapper[4825]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 19 00:07:34 crc kubenswrapper[4825]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 00:07:34 crc kubenswrapper[4825]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.765635 4825 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.771979 4825 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772027 4825 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772038 4825 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772047 4825 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772058 4825 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772068 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772077 4825 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772086 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772095 4825 feature_gate.go:330] unrecognized feature gate: Example Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772104 4825 feature_gate.go:330] 
unrecognized feature gate: InsightsConfig Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772112 4825 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772132 4825 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772145 4825 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772156 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772167 4825 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772176 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772185 4825 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772194 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772202 4825 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772212 4825 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772221 4825 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772229 4825 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772238 4825 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772249 4825 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772260 4825 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772270 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772279 4825 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772287 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772296 4825 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772304 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772313 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772321 4825 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772329 4825 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772338 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772346 4825 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772354 4825 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772363 4825 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772372 4825 feature_gate.go:330] 
unrecognized feature gate: PrivateHostedZoneAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772383 4825 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772393 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772402 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772411 4825 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772419 4825 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772429 4825 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772437 4825 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772446 4825 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772455 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772468 4825 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772479 4825 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772490 4825 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772501 4825 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772540 4825 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772551 4825 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772561 4825 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772570 4825 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772579 4825 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772589 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772597 4825 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772606 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772615 4825 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772624 4825 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772632 4825 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772641 4825 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772653 4825 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772661 4825 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772672 4825 
feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772680 4825 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772689 4825 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772698 4825 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772706 4825 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.772717 4825 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773803 4825 flags.go:64] FLAG: --address="0.0.0.0" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773828 4825 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773847 4825 flags.go:64] FLAG: --anonymous-auth="true" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773860 4825 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773873 4825 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773883 4825 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773902 4825 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773914 4825 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773924 4825 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773935 4825 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773945 4825 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773955 4825 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773967 4825 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773977 4825 flags.go:64] FLAG: --cgroup-root="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773986 4825 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.773996 4825 flags.go:64] FLAG: --client-ca-file="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774005 4825 flags.go:64] FLAG: --cloud-config="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774015 4825 flags.go:64] FLAG: --cloud-provider="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774027 4825 flags.go:64] FLAG: --cluster-dns="[]" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774039 4825 flags.go:64] FLAG: --cluster-domain="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774049 4825 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774059 4825 flags.go:64] FLAG: --config-dir="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774068 4825 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774079 4825 flags.go:64] FLAG: --container-log-max-files="5" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774093 4825 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774102 4825 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774112 4825 
flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774122 4825 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774132 4825 flags.go:64] FLAG: --contention-profiling="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774142 4825 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774152 4825 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774162 4825 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774173 4825 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774184 4825 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774196 4825 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774206 4825 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774216 4825 flags.go:64] FLAG: --enable-load-reader="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774226 4825 flags.go:64] FLAG: --enable-server="true" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774236 4825 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774248 4825 flags.go:64] FLAG: --event-burst="100" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774259 4825 flags.go:64] FLAG: --event-qps="50" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774268 4825 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774278 4825 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 
00:07:34.774289 4825 flags.go:64] FLAG: --eviction-hard="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774300 4825 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774310 4825 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774319 4825 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774329 4825 flags.go:64] FLAG: --eviction-soft="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774341 4825 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774350 4825 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774361 4825 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774371 4825 flags.go:64] FLAG: --experimental-mounter-path="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774380 4825 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774390 4825 flags.go:64] FLAG: --fail-swap-on="true" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774401 4825 flags.go:64] FLAG: --feature-gates="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774413 4825 flags.go:64] FLAG: --file-check-frequency="20s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774423 4825 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774434 4825 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774444 4825 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774454 4825 flags.go:64] FLAG: --healthz-port="10248" Feb 19 00:07:34 crc kubenswrapper[4825]: 
I0219 00:07:34.774464 4825 flags.go:64] FLAG: --help="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774474 4825 flags.go:64] FLAG: --hostname-override="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774483 4825 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774493 4825 flags.go:64] FLAG: --http-check-frequency="20s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774535 4825 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774546 4825 flags.go:64] FLAG: --image-credential-provider-config="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774555 4825 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774565 4825 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774575 4825 flags.go:64] FLAG: --image-service-endpoint="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774584 4825 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774594 4825 flags.go:64] FLAG: --kube-api-burst="100" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774604 4825 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774614 4825 flags.go:64] FLAG: --kube-api-qps="50" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774624 4825 flags.go:64] FLAG: --kube-reserved="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774633 4825 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774644 4825 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774653 4825 flags.go:64] FLAG: --kubelet-cgroups="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 
00:07:34.774663 4825 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774673 4825 flags.go:64] FLAG: --lock-file="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774682 4825 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774692 4825 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774702 4825 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774717 4825 flags.go:64] FLAG: --log-json-split-stream="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774726 4825 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774737 4825 flags.go:64] FLAG: --log-text-split-stream="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774748 4825 flags.go:64] FLAG: --logging-format="text" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774757 4825 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774767 4825 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774777 4825 flags.go:64] FLAG: --manifest-url="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774786 4825 flags.go:64] FLAG: --manifest-url-header="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774799 4825 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774809 4825 flags.go:64] FLAG: --max-open-files="1000000" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774821 4825 flags.go:64] FLAG: --max-pods="110" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774831 4825 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 
00:07:34.774841 4825 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774851 4825 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774863 4825 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774873 4825 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774883 4825 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774900 4825 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774923 4825 flags.go:64] FLAG: --node-status-max-images="50" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774933 4825 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774943 4825 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774953 4825 flags.go:64] FLAG: --pod-cidr="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774962 4825 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774976 4825 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774986 4825 flags.go:64] FLAG: --pod-max-pids="-1" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.774996 4825 flags.go:64] FLAG: --pods-per-core="0" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775006 4825 flags.go:64] FLAG: --port="10250" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775015 4825 flags.go:64] FLAG: 
--protect-kernel-defaults="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775025 4825 flags.go:64] FLAG: --provider-id="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775035 4825 flags.go:64] FLAG: --qos-reserved="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775045 4825 flags.go:64] FLAG: --read-only-port="10255" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775055 4825 flags.go:64] FLAG: --register-node="true" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775064 4825 flags.go:64] FLAG: --register-schedulable="true" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775074 4825 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775089 4825 flags.go:64] FLAG: --registry-burst="10" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775099 4825 flags.go:64] FLAG: --registry-qps="5" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775109 4825 flags.go:64] FLAG: --reserved-cpus="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775118 4825 flags.go:64] FLAG: --reserved-memory="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775131 4825 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775142 4825 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775152 4825 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775162 4825 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775172 4825 flags.go:64] FLAG: --runonce="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775182 4825 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775192 4825 flags.go:64] FLAG: 
--runtime-request-timeout="2m0s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775202 4825 flags.go:64] FLAG: --seccomp-default="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775213 4825 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775223 4825 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775233 4825 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775246 4825 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775256 4825 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775266 4825 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775276 4825 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775285 4825 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775295 4825 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775305 4825 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775315 4825 flags.go:64] FLAG: --system-cgroups="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775325 4825 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775341 4825 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775350 4825 flags.go:64] FLAG: --tls-cert-file="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775360 4825 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 
00:07:34.775371 4825 flags.go:64] FLAG: --tls-min-version="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775383 4825 flags.go:64] FLAG: --tls-private-key-file="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775395 4825 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775407 4825 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775420 4825 flags.go:64] FLAG: --topology-manager-scope="container" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775431 4825 flags.go:64] FLAG: --v="2" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775446 4825 flags.go:64] FLAG: --version="false" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775462 4825 flags.go:64] FLAG: --vmodule="" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775538 4825 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.775555 4825 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775800 4825 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775815 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775825 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775836 4825 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775844 4825 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775853 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 00:07:34 crc 
kubenswrapper[4825]: W0219 00:07:34.775862 4825 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775872 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775881 4825 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775890 4825 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775904 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775912 4825 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775920 4825 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775929 4825 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775938 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775949 4825 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775959 4825 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775968 4825 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775976 4825 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775985 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.775994 4825 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776003 4825 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776011 4825 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776019 4825 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776028 4825 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776036 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776046 4825 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776054 4825 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776063 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776071 4825 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 
00:07:34.776080 4825 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776089 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776097 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776105 4825 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776114 4825 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776125 4825 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776136 4825 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776146 4825 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776155 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776165 4825 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776175 4825 feature_gate.go:330] unrecognized feature gate: Example Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776184 4825 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776196 4825 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776204 4825 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776213 4825 feature_gate.go:330] unrecognized 
feature gate: AWSEFSDriverVolumeMetrics Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776221 4825 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776229 4825 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776238 4825 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776247 4825 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776255 4825 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776264 4825 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776273 4825 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776282 4825 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776291 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776299 4825 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776307 4825 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776315 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776324 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776333 4825 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776344 4825 
feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776355 4825 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776365 4825 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776376 4825 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776385 4825 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776394 4825 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776403 4825 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776412 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776421 4825 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776430 4825 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776439 4825 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.776448 4825 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.776475 4825 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false 
RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.794798 4825 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.795786 4825 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796044 4825 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796079 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796088 4825 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796099 4825 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796109 4825 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796119 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796128 4825 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796137 4825 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796145 4825 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796154 4825 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796163 4825 feature_gate.go:330] unrecognized 
feature gate: AlibabaPlatform Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796171 4825 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796180 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796188 4825 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796196 4825 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796205 4825 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796213 4825 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796221 4825 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796229 4825 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796236 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796244 4825 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796252 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796259 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796267 4825 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796275 4825 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796283 
4825 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796292 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796299 4825 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796314 4825 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796326 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796337 4825 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796347 4825 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796357 4825 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796369 4825 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796383 4825 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796393 4825 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796401 4825 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796410 4825 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796419 4825 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796428 4825 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796436 4825 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796444 4825 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796453 4825 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796461 4825 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796469 4825 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796477 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796485 4825 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796495 4825 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796541 4825 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796550 4825 feature_gate.go:330] unrecognized feature gate: Example Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796559 4825 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796567 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796575 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796583 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796591 4825 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796599 4825 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796608 4825 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796617 4825 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796625 4825 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796633 4825 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796641 4825 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796651 4825 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796661 4825 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796670 4825 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796679 4825 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796687 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796696 4825 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796704 4825 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796712 4825 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796720 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.796728 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.796741 4825 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797027 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 00:07:34 crc 
kubenswrapper[4825]: W0219 00:07:34.797047 4825 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797056 4825 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797072 4825 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797088 4825 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797099 4825 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797109 4825 feature_gate.go:330] unrecognized feature gate: Example Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797118 4825 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797129 4825 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797139 4825 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797148 4825 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797158 4825 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797170 4825 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797179 4825 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797187 4825 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797195 4825 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797205 4825 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797214 4825 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797228 4825 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797406 4825 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797428 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797442 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797452 4825 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797461 4825 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797471 4825 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797479 4825 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797487 4825 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797495 4825 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797535 4825 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797546 4825 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797556 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797564 4825 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797572 4825 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797580 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797588 4825 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797597 4825 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797604 4825 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797612 4825 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797620 4825 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797628 4825 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797637 4825 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797645 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797653 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797660 4825 feature_gate.go:330] 
unrecognized feature gate: ClusterMonitoringConfig Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797668 4825 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797676 4825 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797685 4825 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797694 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797702 4825 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797712 4825 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797723 4825 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797732 4825 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797743 4825 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797752 4825 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797765 4825 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797779 4825 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797790 4825 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797801 4825 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797810 4825 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797818 4825 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797826 4825 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797833 4825 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797841 4825 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797849 4825 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797857 4825 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797864 4825 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797873 4825 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797881 4825 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797889 4825 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797896 4825 feature_gate.go:330] unrecognized 
feature gate: VSphereControlPlaneMachineSet Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.797904 4825 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.797916 4825 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.798433 4825 server.go:940] "Client rotation is on, will bootstrap in background" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.806149 4825 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.806353 4825 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.809817 4825 server.go:997] "Starting client certificate rotation" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.809888 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.810182 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-25 03:21:01.530474056 +0000 UTC Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.810386 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.838386 4825 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 00:07:34 crc kubenswrapper[4825]: E0219 00:07:34.842481 4825 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.844432 4825 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.867004 4825 log.go:25] "Validated CRI v1 runtime API" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.904445 4825 log.go:25] "Validated CRI v1 image API" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.906770 4825 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.915704 4825 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-19-00-01-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.915799 4825 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.951694 4825 manager.go:217] Machine: {Timestamp:2026-02-19 00:07:34.947258888 +0000 UTC m=+0.638225035 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:5c6d8be3-81c6-4c6a-89a0-311f75474e3b BootID:7148ccf5-3e32-4158-b08b-88a47cea7ade Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 
Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:25:b6:a3 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:25:b6:a3 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b1:55:ba Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:bb:18:06 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:72:2d:9c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f6:79:87 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:92:2e:ac:b0:ad:24 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:aa:15:9d:fe:53:14 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.952145 4825 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.952590 4825 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.956650 4825 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.957045 4825 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.957109 4825 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.957612 4825 topology_manager.go:138] "Creating topology manager with none policy" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.957640 4825 container_manager_linux.go:303] "Creating device plugin manager" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.958288 4825 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.958358 4825 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.958670 4825 state_mem.go:36] "Initialized new in-memory state store" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.959287 4825 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.963228 4825 kubelet.go:418] "Attempting to sync node with API server" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.963271 4825 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.963368 4825 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.963408 4825 kubelet.go:324] "Adding apiserver pod source" Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.963435 4825 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.968756 4825 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.971562 4825 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.972284 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused
Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.972339 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused
Feb 19 00:07:34 crc kubenswrapper[4825]: E0219 00:07:34.972438 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError"
Feb 19 00:07:34 crc kubenswrapper[4825]: E0219 00:07:34.972438 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.974153 4825 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.976233 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.976281 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.976298 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.976314 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.976339 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.976361 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.976377 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.976402 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.976426 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.976443 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.976479 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.976496 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.977633 4825 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.978649 4825 server.go:1280] "Started kubelet"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.981204 4825 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused
Feb 19 00:07:34 crc systemd[1]: Started Kubernetes Kubelet.
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.981838 4825 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.982073 4825 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.982170 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.982219 4825 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.987443 4825 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.987503 4825 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.988153 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:28:49.486141195 +0000 UTC
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.988233 4825 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.988966 4825 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 19 00:07:34 crc kubenswrapper[4825]: W0219 00:07:34.989858 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused
Feb 19 00:07:34 crc kubenswrapper[4825]: E0219 00:07:34.989959 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError"
Feb 19 00:07:34 crc kubenswrapper[4825]: E0219 00:07:34.990552 4825 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:07:34 crc kubenswrapper[4825]: E0219 00:07:34.990935 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" interval="200ms"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.991726 4825 factory.go:55] Registering systemd factory
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.991766 4825 factory.go:221] Registration of the systemd container factory successfully
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.993199 4825 factory.go:153] Registering CRI-O factory
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.993239 4825 factory.go:221] Registration of the crio container factory successfully
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.993650 4825 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.993718 4825 factory.go:103] Registering Raw factory
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.993773 4825 manager.go:1196] Started watching for new ooms in manager
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.994950 4825 manager.go:319] Starting recovery of all containers
Feb 19 00:07:34 crc kubenswrapper[4825]: E0219 00:07:34.995185 4825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.207:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18957d2bacef3b33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:07:34.978583347 +0000 UTC m=+0.669549434,LastTimestamp:2026-02-19 00:07:34.978583347 +0000 UTC m=+0.669549434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:07:34 crc kubenswrapper[4825]: I0219 00:07:34.999287 4825 server.go:460] "Adding debug handlers to kubelet server"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.006731 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.006919 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.006951 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.006985 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007012 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007037 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007062 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007090 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007121 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007152 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007182 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007290 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007317 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007354 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007394 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007421 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007449 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007474 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007669 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007718 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007748 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007776 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007851 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007881 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007911 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.007938 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008027 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008062 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008099 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008127 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008148 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008176 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008207 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008234 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008261 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008284 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008304 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008330 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008350 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008370 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008389 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008409 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008430 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008452 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008472 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008495 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008548 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008567 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008588 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008609 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008630 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008651 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008679 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.008701 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009627 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009665 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009689 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009711 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009730 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009748 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009766 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009783 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009807 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009825 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009844 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009863 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009882 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009901 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009923 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009939 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009954 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009972 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.009987 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010006 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010022 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010038 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010055 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010075 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010091 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010108 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010125 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010144 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010161 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010179 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010198 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010218 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010235 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010254 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010271 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010288 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010306 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010323 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010343 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010361 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010378 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010396 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010478 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010497 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010532 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010575 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010593 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client"
seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010611 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010670 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010690 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010719 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010740 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010759 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010780 
4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010800 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010821 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010839 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010856 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010873 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010891 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010908 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010924 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010943 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010960 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010977 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.010993 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011009 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011024 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011050 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011066 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011085 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011101 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" 
seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011116 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011132 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011147 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011166 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011183 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011202 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 
00:07:35.011219 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011235 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011251 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011267 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011282 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011299 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011314 4825 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.011331 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013026 4825 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013084 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013108 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013147 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013164 4825 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013181 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013197 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013233 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013249 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013266 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013298 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013317 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013333 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013387 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013408 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013424 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013458 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013479 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013497 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013538 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013554 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013571 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013611 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013629 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013646 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013663 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013697 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013714 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013731 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013748 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013789 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013807 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013822 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013863 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013881 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" 
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013899 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013936 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013954 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013970 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.013987 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014022 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 
00:07:35.014039 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014057 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014094 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014111 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014127 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014143 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014181 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014197 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014215 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014250 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014266 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014281 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014298 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014334 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014352 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014367 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014382 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014419 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014435 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014450 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014465 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014497 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014527 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014591 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014605 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014621 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014655 4825 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014670 4825 reconstruct.go:97] "Volume reconstruction finished"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.014682 4825 reconciler.go:26] "Reconciler: start to sync state"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.026023 4825 manager.go:324] Recovery completed
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.046806 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.049395 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.049452 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.049464 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.050629 4825 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.050652 4825 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.050678 4825 state_mem.go:36] "Initialized new in-memory state store"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.061815 4825 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.064581 4825 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.064640 4825 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.064677 4825 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 19 00:07:35 crc kubenswrapper[4825]: E0219 00:07:35.064739 4825 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.065007 4825 policy_none.go:49] "None policy: Start"
Feb 19 00:07:35 crc kubenswrapper[4825]: W0219 00:07:35.065927 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused
Feb 19 00:07:35 crc kubenswrapper[4825]: E0219 00:07:35.066051 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.066968 4825 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.067115 4825 state_mem.go:35] "Initializing new in-memory state store"
Feb 19 00:07:35 crc kubenswrapper[4825]: E0219 00:07:35.091685 4825 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.132802 4825 manager.go:334] "Starting Device Plugin manager"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.133430 4825 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.133464 4825 server.go:79] "Starting device plugin registration server"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.134191 4825 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.134244 4825 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.134797 4825 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.134923 4825 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.134932 4825 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 19 00:07:35 crc kubenswrapper[4825]: E0219 00:07:35.145828 4825 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.165570 4825 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.165767 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.167202 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.167280 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.167296 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.167607 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.167711 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.167774 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.169034 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.169081 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.169130 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.169150 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.169099 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.169188 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.169432 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.169712 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.169811 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.170660 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.170704 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.170722 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.170911 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.171081 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.171139 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.171763 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.171811 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.171830 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.172011 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.172044 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.172076 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.172100 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.172144 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.172171 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.172912 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.172963 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.172984 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.173279 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.173316 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.173334 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.173414 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.173442 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.173467 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.173791 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.173833 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.175077 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.175136 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.175162 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: E0219 00:07:35.192605 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" interval="400ms"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217210 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217252 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217279 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217297 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217314 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217331 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217398 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217488 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217603 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217654 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217695 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217758 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217791 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217816 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.217842 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.235196 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.236903 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.237023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.237086 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.237166 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: E0219 00:07:35.237890 4825 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.207:6443: connect: connection refused" node="crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319026 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319114 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319160 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319192 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319223 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319252 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319282 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319312 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319345 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319420 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319453 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319483 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319543 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319575 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.319604 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320048 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320131 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320157 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320186 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320206 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320115 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320056 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320252 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320348 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320385 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320580 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320117 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320317 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320209 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.320109 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.438743 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.441151 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.441231 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.441262 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.441315 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 19 00:07:35 crc kubenswrapper[4825]: E0219 00:07:35.442232 4825 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.207:6443: connect: connection refused" node="crc"
Feb
19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.498210 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.505153 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.518276 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.540143 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.545677 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:07:35 crc kubenswrapper[4825]: W0219 00:07:35.559892 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a9af009d3620c9e97aa3647ba822272baab11e97b83df94d58820ad8e68b29a3 WatchSource:0}: Error finding container a9af009d3620c9e97aa3647ba822272baab11e97b83df94d58820ad8e68b29a3: Status 404 returned error can't find the container with id a9af009d3620c9e97aa3647ba822272baab11e97b83df94d58820ad8e68b29a3 Feb 19 00:07:35 crc kubenswrapper[4825]: W0219 00:07:35.569839 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a881bb4509d9f315dff0557fed072f8ffff5ecf8a6370ed30e7ac0c5b6678cc0 WatchSource:0}: Error finding container a881bb4509d9f315dff0557fed072f8ffff5ecf8a6370ed30e7ac0c5b6678cc0: Status 404 returned error can't find the container with id 
a881bb4509d9f315dff0557fed072f8ffff5ecf8a6370ed30e7ac0c5b6678cc0 Feb 19 00:07:35 crc kubenswrapper[4825]: W0219 00:07:35.586496 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-dbfc4e5245c9f34bfecbcbf4218c14b7fd585ace2a38a93600ab9e8bea7a5c25 WatchSource:0}: Error finding container dbfc4e5245c9f34bfecbcbf4218c14b7fd585ace2a38a93600ab9e8bea7a5c25: Status 404 returned error can't find the container with id dbfc4e5245c9f34bfecbcbf4218c14b7fd585ace2a38a93600ab9e8bea7a5c25 Feb 19 00:07:35 crc kubenswrapper[4825]: E0219 00:07:35.593677 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" interval="800ms" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.843366 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.845135 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.845188 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.845206 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.845244 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 00:07:35 crc kubenswrapper[4825]: E0219 00:07:35.845882 4825 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.207:6443: connect: connection 
refused" node="crc" Feb 19 00:07:35 crc kubenswrapper[4825]: W0219 00:07:35.870530 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused Feb 19 00:07:35 crc kubenswrapper[4825]: E0219 00:07:35.870612 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError" Feb 19 00:07:35 crc kubenswrapper[4825]: W0219 00:07:35.921075 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused Feb 19 00:07:35 crc kubenswrapper[4825]: E0219 00:07:35.921173 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError" Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.982647 4825 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused Feb 19 00:07:35 crc kubenswrapper[4825]: I0219 00:07:35.988821 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-11-19 07:49:10.533877121 +0000 UTC Feb 19 00:07:36 crc kubenswrapper[4825]: W0219 00:07:36.008110 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused Feb 19 00:07:36 crc kubenswrapper[4825]: E0219 00:07:36.008201 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError" Feb 19 00:07:36 crc kubenswrapper[4825]: W0219 00:07:36.055681 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused Feb 19 00:07:36 crc kubenswrapper[4825]: E0219 00:07:36.055755 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError" Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 00:07:36.070397 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a881bb4509d9f315dff0557fed072f8ffff5ecf8a6370ed30e7ac0c5b6678cc0"} Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 00:07:36.071576 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9e990894e71fc5e19a177ad7276d937f56c23cc39bf3741f8bb1bfe0f512cb65"} Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 00:07:36.072586 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a9af009d3620c9e97aa3647ba822272baab11e97b83df94d58820ad8e68b29a3"} Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 00:07:36.073594 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5cce9922fa952a8b06b094ef8321d5c25ccadc3fe1b39f315e00059ef58f3352"} Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 00:07:36.074604 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"dbfc4e5245c9f34bfecbcbf4218c14b7fd585ace2a38a93600ab9e8bea7a5c25"} Feb 19 00:07:36 crc kubenswrapper[4825]: E0219 00:07:36.394973 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" interval="1.6s" Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 00:07:36.646046 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 00:07:36.648406 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 00:07:36.648457 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 
00:07:36.648471 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 00:07:36.648528 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 00:07:36 crc kubenswrapper[4825]: E0219 00:07:36.649150 4825 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.207:6443: connect: connection refused" node="crc" Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 00:07:36.847040 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 00:07:36 crc kubenswrapper[4825]: E0219 00:07:36.848428 4825 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError" Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 00:07:36.983116 4825 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused Feb 19 00:07:36 crc kubenswrapper[4825]: I0219 00:07:36.989786 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:24:19.039515754 +0000 UTC Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.082178 4825 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea" exitCode=0 Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 
00:07:37.082314 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea"} Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.082410 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.084342 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.084415 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.084435 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.086856 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6"} Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.086933 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90"} Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.086965 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88"} Feb 19 00:07:37 crc kubenswrapper[4825]: 
I0219 00:07:37.090062 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091" exitCode=0 Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.090320 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.090690 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091"} Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.092808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.092864 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.092885 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.095635 4825 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3aac5e64f80becccbeb75b9d7c8ea3c9eb6130da2ef997f35f432e929150aa10" exitCode=0 Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.095761 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3aac5e64f80becccbeb75b9d7c8ea3c9eb6130da2ef997f35f432e929150aa10"} Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.095805 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.097161 4825 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.098082 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.098124 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.098142 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.098403 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.098446 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.098464 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.098856 4825 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b" exitCode=0 Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.098894 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b"} Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.099000 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.103459 4825 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.103539 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.103559 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4825]: W0219 00:07:37.892097 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused Feb 19 00:07:37 crc kubenswrapper[4825]: E0219 00:07:37.892223 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError" Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.982108 4825 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused Feb 19 00:07:37 crc kubenswrapper[4825]: I0219 00:07:37.990548 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:18:44.833002641 +0000 UTC Feb 19 00:07:37 crc kubenswrapper[4825]: E0219 00:07:37.995869 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" interval="3.2s" 
Feb 19 00:07:38 crc kubenswrapper[4825]: W0219 00:07:38.058648 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused Feb 19 00:07:38 crc kubenswrapper[4825]: E0219 00:07:38.058994 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.103741 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.103770 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d5ba6ec8bfc6a365ea9af422f7c6ec0479adbeef9d26a7afe8c66f4ba339482b"} Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.104057 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d3479bc97f163b6b20bd4ff73f1b4c6c4a984f626a5ab0bfbf38ea47f03ea88a"} Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.104097 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e3399fe5505d69fda5547fa2b30a745b1e14c3b4efc70d848b052db43fd8d65a"} Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.105218 4825 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.105311 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.105382 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.106455 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68"} Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.106688 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:38 crc kubenswrapper[4825]: W0219 00:07:38.109216 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused Feb 19 00:07:38 crc kubenswrapper[4825]: E0219 00:07:38.109321 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.109394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.109485 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 
crc kubenswrapper[4825]: I0219 00:07:38.109526 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.112671 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5"} Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.112722 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25"} Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.112737 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346"} Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.115188 4825 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f8120c8cc3f0d62fe1f26cd01914563f0665f0bb830e70e90ac6f64709e502c0" exitCode=0 Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.115253 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f8120c8cc3f0d62fe1f26cd01914563f0665f0bb830e70e90ac6f64709e502c0"} Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.115327 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.116065 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 
00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.116101 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.116113 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.118426 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"39bb2cff8e4fc6646f9ef1474f78f40ba15d5a6ebc3a650f358f0dc714a70652"} Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.118499 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.119093 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.119119 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.119128 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.249354 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.250752 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.250793 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.250803 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.250836 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 00:07:38 crc kubenswrapper[4825]: E0219 00:07:38.251417 4825 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.207:6443: connect: connection refused" node="crc" Feb 19 00:07:38 crc kubenswrapper[4825]: W0219 00:07:38.330560 4825 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused Feb 19 00:07:38 crc kubenswrapper[4825]: E0219 00:07:38.330644 4825 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.207:6443: connect: connection refused" logger="UnhandledError" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.761485 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.838361 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.982063 4825 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.207:6443: connect: connection refused Feb 19 00:07:38 crc kubenswrapper[4825]: I0219 00:07:38.991306 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:49:21.260911472 +0000 UTC Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.016601 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.126126 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8489d08f008569c177df432bf2615fa68c190984abae736fafa9bc94a94b7636"} Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.126201 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711"} Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.126318 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.127863 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.127913 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.127927 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.129543 4825 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1ffb9515efa6c88c2a367aa9f6e46660aa5d40a7c70ad330ef7f706f18493da9" exitCode=0 Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.129761 4825 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.129849 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.129894 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.130011 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.129620 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1ffb9515efa6c88c2a367aa9f6e46660aa5d40a7c70ad330ef7f706f18493da9"} Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.130404 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.131544 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.131572 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.131582 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.131600 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.131637 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.131657 4825 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.132128 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.132262 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.132363 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.132897 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.132917 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4825]: I0219 00:07:39.132928 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4825]: E0219 00:07:39.269829 4825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.207:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18957d2bacef3b33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:07:34.978583347 +0000 UTC m=+0.669549434,LastTimestamp:2026-02-19 00:07:34.978583347 +0000 UTC m=+0.669549434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:07:39 crc 
kubenswrapper[4825]: I0219 00:07:39.991910 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 04:16:06.602596513 +0000 UTC Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.138547 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b7e246d824ffd8501597c1c42fc00a61a18957688d96291a4e69cd4bd8033304"} Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.139011 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9c9c7ca92a9aef8f55e86012761e8fa21f37f11c1b35ec242adb9d225b72c0e8"} Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.139036 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"aa76390f44634ac033afdc280a3a6d7dc7ad22f6a27e0d2028436be587331e6f"} Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.139055 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6d5455060a39bc50828900861466a48315ac23309be804f6462f2399449881f3"} Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.142005 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.146777 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8489d08f008569c177df432bf2615fa68c190984abae736fafa9bc94a94b7636" exitCode=255 Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.146884 4825 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8489d08f008569c177df432bf2615fa68c190984abae736fafa9bc94a94b7636"} Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.146938 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.147089 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.147695 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.148582 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.148624 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.148672 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.148693 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.148636 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.148781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.149029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.149057 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.149073 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.150012 4825 scope.go:117] "RemoveContainer" containerID="8489d08f008569c177df432bf2615fa68c190984abae736fafa9bc94a94b7636" Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.992610 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:17:50.490396734 +0000 UTC Feb 19 00:07:40 crc kubenswrapper[4825]: I0219 00:07:40.994817 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.154355 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.157885 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611"} Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.158020 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.158119 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.159757 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 
00:07:41.159840 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.159867 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.165911 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c66bdbf514b0e388f10749f473bda6644126662866a7c76a481f4ec2e046a0f9"} Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.166089 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.167631 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.167687 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.167707 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.452627 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.454855 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.454930 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.454956 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4825]: 
I0219 00:07:41.455000 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.492144 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:07:41 crc kubenswrapper[4825]: I0219 00:07:41.993595 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:39:47.915807288 +0000 UTC Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.016956 4825 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.017062 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.169584 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.169660 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.169607 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.171678 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.171709 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.171744 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.171761 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.171772 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.171786 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.508970 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.676688 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.716349 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 19 00:07:42 crc kubenswrapper[4825]: I0219 00:07:42.994027 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 01:07:01.59990046 +0000 UTC Feb 19 00:07:43 crc kubenswrapper[4825]: I0219 00:07:43.172380 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:43 crc kubenswrapper[4825]: I0219 00:07:43.172671 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:43 
crc kubenswrapper[4825]: I0219 00:07:43.173849 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:43 crc kubenswrapper[4825]: I0219 00:07:43.173893 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:43 crc kubenswrapper[4825]: I0219 00:07:43.173903 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:43 crc kubenswrapper[4825]: I0219 00:07:43.175013 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:43 crc kubenswrapper[4825]: I0219 00:07:43.175045 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:43 crc kubenswrapper[4825]: I0219 00:07:43.175053 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:43 crc kubenswrapper[4825]: I0219 00:07:43.994962 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:37:13.17714766 +0000 UTC Feb 19 00:07:44 crc kubenswrapper[4825]: I0219 00:07:44.175597 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:44 crc kubenswrapper[4825]: I0219 00:07:44.176300 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:44 crc kubenswrapper[4825]: I0219 00:07:44.176932 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:44 crc kubenswrapper[4825]: I0219 00:07:44.176991 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:44 crc kubenswrapper[4825]: I0219 00:07:44.177015 
4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:44 crc kubenswrapper[4825]: I0219 00:07:44.177836 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:44 crc kubenswrapper[4825]: I0219 00:07:44.178074 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:44 crc kubenswrapper[4825]: I0219 00:07:44.178275 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:44 crc kubenswrapper[4825]: I0219 00:07:44.995323 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:36:30.587464853 +0000 UTC Feb 19 00:07:45 crc kubenswrapper[4825]: E0219 00:07:45.146045 4825 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:07:45 crc kubenswrapper[4825]: I0219 00:07:45.996242 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:50:32.908966841 +0000 UTC Feb 19 00:07:46 crc kubenswrapper[4825]: I0219 00:07:46.997497 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:49:33.616606861 +0000 UTC Feb 19 00:07:47 crc kubenswrapper[4825]: I0219 00:07:47.288438 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:07:47 crc kubenswrapper[4825]: I0219 00:07:47.289039 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:47 crc kubenswrapper[4825]: I0219 00:07:47.290714 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:47 crc kubenswrapper[4825]: I0219 00:07:47.290765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:47 crc kubenswrapper[4825]: I0219 00:07:47.290775 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:47 crc kubenswrapper[4825]: I0219 00:07:47.295629 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:07:47 crc kubenswrapper[4825]: I0219 00:07:47.998568 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:02:56.277753817 +0000 UTC Feb 19 00:07:48 crc kubenswrapper[4825]: I0219 00:07:48.187720 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:48 crc kubenswrapper[4825]: I0219 00:07:48.189075 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:48 crc kubenswrapper[4825]: I0219 00:07:48.189139 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:48 crc kubenswrapper[4825]: I0219 00:07:48.189156 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:48 crc kubenswrapper[4825]: I0219 00:07:48.195985 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:07:48 crc kubenswrapper[4825]: I0219 00:07:48.998925 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2026-01-06 20:54:40.638285178 +0000 UTC Feb 19 00:07:49 crc kubenswrapper[4825]: I0219 00:07:49.190563 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:49 crc kubenswrapper[4825]: I0219 00:07:49.191755 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:49 crc kubenswrapper[4825]: I0219 00:07:49.191814 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:49 crc kubenswrapper[4825]: I0219 00:07:49.191835 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:49 crc kubenswrapper[4825]: I0219 00:07:49.840856 4825 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 00:07:49 crc kubenswrapper[4825]: I0219 00:07:49.840938 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 00:07:49 crc kubenswrapper[4825]: I0219 00:07:49.846442 4825 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 00:07:49 crc 
kubenswrapper[4825]: I0219 00:07:49.846547 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 00:07:49 crc kubenswrapper[4825]: I0219 00:07:49.999221 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:41:55.096928823 +0000 UTC Feb 19 00:07:50 crc kubenswrapper[4825]: I0219 00:07:50.999554 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:48:59.336734867 +0000 UTC Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.001652 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:34:49.45470521 +0000 UTC Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.017841 4825 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.017928 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.519765 4825 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.520069 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.520723 4825 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.520832 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.522354 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.522424 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.522443 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.527850 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.757614 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.758328 4825 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.765692 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.765807 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.765839 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:52 crc kubenswrapper[4825]: I0219 00:07:52.783280 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 19 00:07:53 crc kubenswrapper[4825]: I0219 00:07:53.002796 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:29:29.971075261 +0000 UTC Feb 19 00:07:53 crc kubenswrapper[4825]: I0219 00:07:53.204900 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:53 crc kubenswrapper[4825]: I0219 00:07:53.205663 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:07:53 crc kubenswrapper[4825]: I0219 00:07:53.205950 4825 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 00:07:53 crc kubenswrapper[4825]: I0219 00:07:53.206075 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial 
tcp 192.168.126.11:17697: connect: connection refused" Feb 19 00:07:53 crc kubenswrapper[4825]: I0219 00:07:53.206964 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:53 crc kubenswrapper[4825]: I0219 00:07:53.207023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:53 crc kubenswrapper[4825]: I0219 00:07:53.207055 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:53 crc kubenswrapper[4825]: I0219 00:07:53.207998 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:53 crc kubenswrapper[4825]: I0219 00:07:53.208078 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:53 crc kubenswrapper[4825]: I0219 00:07:53.208101 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.003134 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:52:32.910274275 +0000 UTC Feb 19 00:07:54 crc kubenswrapper[4825]: E0219 00:07:54.803716 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.804675 4825 trace.go:236] Trace[759963361]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 00:07:43.866) (total time: 10938ms): Feb 19 00:07:54 crc kubenswrapper[4825]: Trace[759963361]: ---"Objects listed" error: 10938ms (00:07:54.804) Feb 19 00:07:54 crc 
kubenswrapper[4825]: Trace[759963361]: [10.938155399s] [10.938155399s] END Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.804710 4825 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.805888 4825 trace.go:236] Trace[1029841389]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 00:07:44.237) (total time: 10567ms): Feb 19 00:07:54 crc kubenswrapper[4825]: Trace[1029841389]: ---"Objects listed" error: 10567ms (00:07:54.805) Feb 19 00:07:54 crc kubenswrapper[4825]: Trace[1029841389]: [10.567991227s] [10.567991227s] END Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.805927 4825 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 00:07:54 crc kubenswrapper[4825]: E0219 00:07:54.807284 4825 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.807843 4825 trace.go:236] Trace[1458772582]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 00:07:44.587) (total time: 10220ms): Feb 19 00:07:54 crc kubenswrapper[4825]: Trace[1458772582]: ---"Objects listed" error: 10219ms (00:07:54.807) Feb 19 00:07:54 crc kubenswrapper[4825]: Trace[1458772582]: [10.220037819s] [10.220037819s] END Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.807875 4825 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.808965 4825 trace.go:236] Trace[125845206]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 00:07:41.472) (total time: 13336ms): Feb 19 00:07:54 crc kubenswrapper[4825]: Trace[125845206]: ---"Objects listed" error: 
13336ms (00:07:54.808) Feb 19 00:07:54 crc kubenswrapper[4825]: Trace[125845206]: [13.336449202s] [13.336449202s] END Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.809251 4825 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.809698 4825 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.825135 4825 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.851686 4825 csr.go:261] certificate signing request csr-mb5l5 is approved, waiting to be issued Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.864036 4825 csr.go:257] certificate signing request csr-mb5l5 is issued Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.976855 4825 apiserver.go:52] "Watching apiserver" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.980943 4825 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.981362 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.981944 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.982004 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:54 crc kubenswrapper[4825]: E0219 00:07:54.982083 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.982149 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:54 crc kubenswrapper[4825]: E0219 00:07:54.982287 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.982167 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.982653 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.982756 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:07:54 crc kubenswrapper[4825]: E0219 00:07:54.982843 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.985488 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.987079 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.987237 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.991210 4825 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.992829 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.993063 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.997176 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.997487 4825 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.997844 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 00:07:54 crc kubenswrapper[4825]: I0219 00:07:54.997047 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.003328 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:46:55.970606575 +0000 UTC Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010533 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010600 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010629 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010659 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010686 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010720 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010756 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010789 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010822 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 
00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010854 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010882 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010923 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010946 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010966 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.010990 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011015 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011036 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011065 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011362 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011392 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011419 4825 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011474 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011544 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011573 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011603 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011634 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011663 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011672 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011693 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011772 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011800 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 
00:07:55.011821 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011839 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011864 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011884 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011904 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.011928 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012136 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012158 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012181 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012200 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012223 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 00:07:55 
crc kubenswrapper[4825]: I0219 00:07:55.012243 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012262 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012284 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012303 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012322 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012318 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" 
(OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012341 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012363 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012386 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012381 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012410 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012482 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012529 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012552 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012574 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012597 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012620 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012643 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012665 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012710 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012744 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012779 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012863 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012885 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012904 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012926 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012949 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.012978 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013001 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013025 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013046 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013069 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 
00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013097 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013124 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013150 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013182 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013209 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013237 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013259 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013282 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013286 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013308 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013337 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013363 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013387 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013412 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013455 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013479 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013525 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013551 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013581 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013607 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013641 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013674 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013700 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013734 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013762 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013789 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013818 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013846 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013875 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013904 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013931 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 00:07:55 crc kubenswrapper[4825]: 
I0219 00:07:55.013958 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013987 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014018 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014075 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014105 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014138 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014167 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014198 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014227 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014255 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014278 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 00:07:55 crc 
kubenswrapper[4825]: I0219 00:07:55.014298 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014319 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014342 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014362 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014382 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014402 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014421 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014440 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014458 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014477 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014501 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014540 
4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014561 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014582 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014609 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014653 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014682 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014704 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014789 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014809 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014828 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014852 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 
00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014869 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014888 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014907 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014924 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014964 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014983 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015009 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015027 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015047 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015081 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015103 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015125 4825 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015154 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015179 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015196 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015215 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015233 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015252 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015269 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015288 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015310 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015333 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015364 4825 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015383 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015403 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015422 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015438 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015458 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015488 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015516 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015534 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015553 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015569 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015586 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015610 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015656 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015680 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015699 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015719 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") 
pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015737 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015757 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015789 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015817 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015839 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015866 4825 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015891 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015914 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015931 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015947 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015972 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.015996 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016016 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016037 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016061 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016090 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016116 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" 
(UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016135 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016162 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016188 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016256 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016286 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016341 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016376 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016407 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016434 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016458 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016491 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.025685 4825 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.031022 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.031092 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.031118 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.031143 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.031165 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.031190 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.031215 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.031242 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.031333 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.031350 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.031365 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.031377 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.039657 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013477 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.043811 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.013859 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014001 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014152 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.014300 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016924 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.016971 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.043996 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.044736 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.017109 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: 
"samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.017261 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.017754 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.017955 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.018163 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.018324 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.018677 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.018834 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.019076 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.019684 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.019851 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.020006 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.020260 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.020391 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.021533 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.021651 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.021730 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.021918 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.021938 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.028551 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.028676 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.028918 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.032683 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.034663 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.035039 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.037241 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.038826 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.038870 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.039544 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.041659 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.042837 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.043116 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.043155 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.043179 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.043300 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.043346 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.043522 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.043744 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.043962 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.044055 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.044252 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.044272 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.044280 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.044320 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.045217 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.045522 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.045692 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.045883 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.045976 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.045997 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.046243 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.046548 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.046579 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.046862 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.046889 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.047880 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.046969 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.047603 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.047995 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.047633 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.047679 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.048045 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.047997 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.048165 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.048208 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.048652 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.048712 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.048777 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.048959 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:55.548937872 +0000 UTC m=+21.239903919 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.049018 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.049032 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.049321 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.049728 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.050041 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.050163 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.050328 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.050798 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.050971 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.051241 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.051694 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:07:55.551666206 +0000 UTC m=+21.242632253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.053715 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.052214 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.052582 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.053285 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.053545 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.053904 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.052576 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.053979 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.054158 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.054579 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.054899 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.055042 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.055136 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:55.555093979 +0000 UTC m=+21.246060026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.055167 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.055381 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.055389 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.055692 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.055783 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.056112 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.056234 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.056471 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.056618 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.056978 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.057149 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.057336 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.057357 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.057439 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.052775 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.057570 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.057813 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.057909 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.058088 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.058103 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.058201 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.058340 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.058439 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.058448 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.058717 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.058721 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.058779 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.059084 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.059177 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.059211 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.059500 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.059596 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.059765 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.059842 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.060095 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.060204 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.060554 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.060613 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.060850 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.060939 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.061084 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.061107 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.061557 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.061731 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.062043 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.062211 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.062280 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.062313 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.062641 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.062660 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.062827 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.063128 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.063268 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.063477 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.063700 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.063461 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.064119 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.064183 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.064525 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.064672 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.065023 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.065430 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.065829 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.065998 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.066752 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.066993 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.067005 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.067002 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.067030 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.067447 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.067580 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.067661 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.067815 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:55.56778703 +0000 UTC m=+21.258753087 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.068052 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.068597 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.069354 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.070586 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: 
"1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.070919 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.070943 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.070958 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.071013 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:55.570995776 +0000 UTC m=+21.261961993 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.071015 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.074642 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.075960 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.076396 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.076462 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.076638 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.076773 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.076905 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.077561 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.082793 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.082836 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.082807 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.082956 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.082958 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.083044 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.083092 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.083379 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.083458 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.083488 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.083620 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.086728 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.087259 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.087374 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.087402 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.087527 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.087983 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.088014 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.089078 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.093358 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.093672 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.094368 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.094656 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.095419 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.095988 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.097010 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.098989 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.099952 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.100711 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.103093 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.103823 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.106135 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.106137 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.106401 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.107551 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.109387 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.110574 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.111117 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.112683 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.116567 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.117455 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.118342 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.120630 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.121330 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.121896 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.125022 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.127352 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.128766 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.130930 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.131602 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133078 4825 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133196 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133231 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133270 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133356 4825 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133368 4825 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133378 4825 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133387 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133397 4825 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133407 4825 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133417 4825 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133426 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node 
\"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133435 4825 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133452 4825 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133461 4825 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133471 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133481 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133491 4825 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133517 4825 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133528 4825 
reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133537 4825 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133546 4825 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133555 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133564 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133573 4825 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133583 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133592 4825 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133636 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133633 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133679 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133686 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133726 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133737 4825 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node 
\"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133748 4825 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133758 4825 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133768 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133777 4825 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133787 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133797 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133807 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 
00:07:55.133817 4825 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133827 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133837 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133846 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133856 4825 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133865 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133919 4825 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133929 4825 reconciler_common.go:293] "Volume detached for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133941 4825 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133950 4825 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133960 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133968 4825 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133979 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133988 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.133998 4825 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134009 4825 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134019 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134027 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134037 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134047 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134057 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134067 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 
00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134078 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134089 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134099 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134110 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134120 4825 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134130 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134140 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134149 4825 
reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134158 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134168 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134181 4825 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134192 4825 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134232 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134261 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134271 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134284 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134296 4825 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134307 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134317 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134327 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134325 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134338 4825 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134374 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134389 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134402 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134413 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134425 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134435 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 
19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134445 4825 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134456 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134466 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134478 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134488 4825 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134497 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134534 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134546 4825 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134556 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134565 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134576 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134585 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134595 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134605 4825 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134614 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134623 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134632 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134641 4825 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134651 4825 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134662 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134674 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134684 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134694 4825 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134705 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134714 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134725 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134734 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134744 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134754 4825 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") 
on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134763 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134772 4825 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134782 4825 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134792 4825 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134801 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134810 4825 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134819 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134848 4825 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134867 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134880 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134892 4825 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134904 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134920 4825 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134936 4825 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134949 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134961 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134972 4825 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134985 4825 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.134997 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.135033 4825 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.135046 4825 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.135058 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.135327 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.135781 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.135989 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136005 4825 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136017 4825 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136028 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136038 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136048 4825 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136058 4825 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136071 4825 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136086 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136099 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136111 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 
00:07:55.136120 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136130 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136140 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136149 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136159 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136168 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136178 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136188 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136198 4825 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136209 4825 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136218 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136228 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136237 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136553 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136247 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136685 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136747 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136787 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136849 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136866 4825 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136926 4825 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136940 4825 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136951 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136962 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136971 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136980 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136985 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.136990 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137062 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" 
DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137077 4825 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137094 4825 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137104 4825 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137115 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137128 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137139 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137149 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc 
kubenswrapper[4825]: I0219 00:07:55.137159 4825 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137168 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137179 4825 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137189 4825 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137198 4825 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137208 4825 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137222 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137232 4825 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137242 4825 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.137254 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.138723 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.139920 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.140442 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.141702 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.142467 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 00:07:55 crc 
kubenswrapper[4825]: I0219 00:07:55.143002 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.144079 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.145137 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.145777 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.145934 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.146871 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" 
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.147454 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.148479 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.149418 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.150453 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.151020 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.151555 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.152759 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.153454 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" 
Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.154057 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.159310 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.169038 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.179146 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.193054 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.203419 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.214206 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.238495 4825 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.238898 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.303653 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.320865 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:07:55 crc kubenswrapper[4825]: W0219 00:07:55.324434 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-7f19050cc9478d74810f3a1ecc43ef6cd828ccfd6799c726b76425c4ccd4b449 WatchSource:0}: Error finding container 7f19050cc9478d74810f3a1ecc43ef6cd828ccfd6799c726b76425c4ccd4b449: Status 404 returned error can't find the container with id 7f19050cc9478d74810f3a1ecc43ef6cd828ccfd6799c726b76425c4ccd4b449 Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.340821 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:07:55 crc kubenswrapper[4825]: W0219 00:07:55.356554 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-5c4866e635f4f8bc2247a15dd712eb6c91d8e342be81b2a8ec0dfd888be68aad WatchSource:0}: Error finding container 5c4866e635f4f8bc2247a15dd712eb6c91d8e342be81b2a8ec0dfd888be68aad: Status 404 returned error can't find the container with id 5c4866e635f4f8bc2247a15dd712eb6c91d8e342be81b2a8ec0dfd888be68aad Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.641440 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.641608 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.641704 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:07:56.641661823 +0000 UTC m=+22.332627880 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.641777 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.641839 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:55 
crc kubenswrapper[4825]: E0219 00:07:55.641845 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.641870 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.641873 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.641909 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.642000 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:56.64195468 +0000 UTC m=+22.332920737 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.642095 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.642166 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.642203 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.642220 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.642247 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:56.642212627 +0000 UTC m=+22.333178704 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.642288 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:56.642267749 +0000 UTC m=+22.333233796 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.642651 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: E0219 00:07:55.642825 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:56.642792093 +0000 UTC m=+22.333758170 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.865929 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 00:02:54 +0000 UTC, rotation deadline is 2026-12-07 22:13:26.017270252 +0000 UTC Feb 19 00:07:55 crc kubenswrapper[4825]: I0219 00:07:55.866007 4825 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7006h5m30.151266116s for next certificate rotation Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.004129 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:45:10.363612801 +0000 UTC Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.215055 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66"} Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.215124 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557"} Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.215143 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a6712d35fcfa5ffb6006c4434a1f40beb5bbc87e02a2c9501747ffea9aea4c3e"} Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.217130 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7"} Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.217199 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7f19050cc9478d74810f3a1ecc43ef6cd828ccfd6799c726b76425c4ccd4b449"} Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.219205 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.219782 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.221834 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611" exitCode=255 Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.221880 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611"} Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.221925 4825 scope.go:117] "RemoveContainer" 
containerID="8489d08f008569c177df432bf2615fa68c190984abae736fafa9bc94a94b7636" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.223053 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5c4866e635f4f8bc2247a15dd712eb6c91d8e342be81b2a8ec0dfd888be68aad"} Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.249790 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.249843 4825 scope.go:117] "RemoveContainer" containerID="a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611" Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.250059 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.258308 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.273046 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.291490 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.307324 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.322538 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.338075 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.355873 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489d08f008569c177df432bf2615fa68c190984abae736fafa9bc94a94b7636\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:39Z\\\",\\\"message\\\":\\\"W0219 00:07:38.857574 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 00:07:38.857965 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771459658 cert, and key in /tmp/serving-cert-430603550/serving-signer.crt, /tmp/serving-cert-430603550/serving-signer.key\\\\nI0219 00:07:39.108859 1 observer_polling.go:159] Starting file observer\\\\nW0219 00:07:39.111826 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 00:07:39.112010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:39.112733 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-430603550/tls.crt::/tmp/serving-cert-430603550/tls.key\\\\\\\"\\\\nF0219 00:07:39.343150 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.371419 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.389934 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.404824 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.420655 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.437969 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.451803 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.525127 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-f526c"] Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.525585 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-f526c" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.527836 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.527990 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.530691 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.554441 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.573452 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.593270 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.616796 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.636827 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.651244 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.651320 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.651374 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.651402 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.651427 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/85963def-84d2-4e82-a252-8d8389151c81-hosts-file\") pod \"node-resolver-f526c\" (UID: \"85963def-84d2-4e82-a252-8d8389151c81\") " pod="openshift-dns/node-resolver-f526c" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.651446 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrl6l\" (UniqueName: \"kubernetes.io/projected/85963def-84d2-4e82-a252-8d8389151c81-kube-api-access-hrl6l\") pod \"node-resolver-f526c\" (UID: \"85963def-84d2-4e82-a252-8d8389151c81\") " pod="openshift-dns/node-resolver-f526c" Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.651488 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 00:07:58.651454418 +0000 UTC m=+24.342420465 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.651557 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.651607 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.651625 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.651636 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.651667 4825 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.651711 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.651675 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:58.651662263 +0000 UTC m=+24.342628310 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.651833 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:58.651802188 +0000 UTC m=+24.342768405 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.651859 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:58.651847089 +0000 UTC m=+24.342813486 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.651957 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.651975 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.651991 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:56 crc kubenswrapper[4825]: E0219 00:07:56.652049 4825 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:58.652038864 +0000 UTC m=+24.343004921 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.662682 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.683425 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8489d08f008569c177df432bf2615fa68c190984abae736fafa9bc94a94b7636\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:39Z\\\",\\\"message\\\":\\\"W0219 00:07:38.857574 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0219 00:07:38.857965 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771459658 cert, and key in /tmp/serving-cert-430603550/serving-signer.crt, /tmp/serving-cert-430603550/serving-signer.key\\\\nI0219 00:07:39.108859 1 observer_polling.go:159] Starting file observer\\\\nW0219 00:07:39.111826 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0219 00:07:39.112010 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:39.112733 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-430603550/tls.crt::/tmp/serving-cert-430603550/tls.key\\\\\\\"\\\\nF0219 00:07:39.343150 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 
00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.697201 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.752386 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/85963def-84d2-4e82-a252-8d8389151c81-hosts-file\") pod \"node-resolver-f526c\" (UID: \"85963def-84d2-4e82-a252-8d8389151c81\") " 
pod="openshift-dns/node-resolver-f526c" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.752428 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrl6l\" (UniqueName: \"kubernetes.io/projected/85963def-84d2-4e82-a252-8d8389151c81-kube-api-access-hrl6l\") pod \"node-resolver-f526c\" (UID: \"85963def-84d2-4e82-a252-8d8389151c81\") " pod="openshift-dns/node-resolver-f526c" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.752619 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/85963def-84d2-4e82-a252-8d8389151c81-hosts-file\") pod \"node-resolver-f526c\" (UID: \"85963def-84d2-4e82-a252-8d8389151c81\") " pod="openshift-dns/node-resolver-f526c" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.792432 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrl6l\" (UniqueName: \"kubernetes.io/projected/85963def-84d2-4e82-a252-8d8389151c81-kube-api-access-hrl6l\") pod \"node-resolver-f526c\" (UID: \"85963def-84d2-4e82-a252-8d8389151c81\") " pod="openshift-dns/node-resolver-f526c" Feb 19 00:07:56 crc kubenswrapper[4825]: I0219 00:07:56.837997 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-f526c" Feb 19 00:07:56 crc kubenswrapper[4825]: W0219 00:07:56.854807 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85963def_84d2_4e82_a252_8d8389151c81.slice/crio-dbac76b1ae3371ad00e1eedeb8c4a84b7a5e8088f05ce6d557df09ae960d19a4 WatchSource:0}: Error finding container dbac76b1ae3371ad00e1eedeb8c4a84b7a5e8088f05ce6d557df09ae960d19a4: Status 404 returned error can't find the container with id dbac76b1ae3371ad00e1eedeb8c4a84b7a5e8088f05ce6d557df09ae960d19a4 Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.004954 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:36:12.087001466 +0000 UTC Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.066767 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.066913 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:57 crc kubenswrapper[4825]: E0219 00:07:57.066967 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.066782 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:57 crc kubenswrapper[4825]: E0219 00:07:57.067073 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:57 crc kubenswrapper[4825]: E0219 00:07:57.067245 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.070133 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.227082 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.229860 4825 scope.go:117] "RemoveContainer" containerID="a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611" Feb 19 00:07:57 crc kubenswrapper[4825]: E0219 00:07:57.230025 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.230564 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-f526c" event={"ID":"85963def-84d2-4e82-a252-8d8389151c81","Type":"ContainerStarted","Data":"acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134"} Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.230633 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-f526c" event={"ID":"85963def-84d2-4e82-a252-8d8389151c81","Type":"ContainerStarted","Data":"dbac76b1ae3371ad00e1eedeb8c4a84b7a5e8088f05ce6d557df09ae960d19a4"} Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.247108 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.262049 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.268456 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.287768 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.332491 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.357013 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-zfx7x"] Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.357366 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.357982 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lb5zm"] Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.358515 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.361569 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-tggq9"] Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.362041 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: W0219 00:07:57.362441 4825 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 19 00:07:57 crc kubenswrapper[4825]: E0219 00:07:57.362478 4825 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 00:07:57 crc kubenswrapper[4825]: W0219 00:07:57.362538 4825 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 19 00:07:57 crc kubenswrapper[4825]: E0219 00:07:57.362552 4825 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.363098 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.363263 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.363396 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.363458 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.363536 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.363726 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.364934 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.365131 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.365251 4825 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.365409 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.365931 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.404813 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.434389 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.459428 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/efe56e91-46ea-4365-8dc4-643fafea609a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.459475 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd6d1b9a-0fd9-43be-9ed5-7430e830b94f-mcd-auth-proxy-config\") pod \"machine-config-daemon-tggq9\" (UID: \"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\") " pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 
00:07:57.459515 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-var-lib-kubelet\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.459543 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bd6d1b9a-0fd9-43be-9ed5-7430e830b94f-rootfs\") pod \"machine-config-daemon-tggq9\" (UID: \"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\") " pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.459561 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-run-netns\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.459617 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/efe56e91-46ea-4365-8dc4-643fafea609a-cnibin\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.459651 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/efe56e91-46ea-4365-8dc4-643fafea609a-os-release\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc 
kubenswrapper[4825]: I0219 00:07:57.459763 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2daa6777-c1b1-4fae-9c14-cfe10867288a-multus-daemon-config\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.459815 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/efe56e91-46ea-4365-8dc4-643fafea609a-system-cni-dir\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.459841 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-run-k8s-cni-cncf-io\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.459866 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-multus-socket-dir-parent\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.459890 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-multus-conf-dir\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 
19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.459974 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-cnibin\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460030 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-var-lib-cni-multus\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460050 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-run-multus-certs\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460092 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-system-cni-dir\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460118 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-etc-kubernetes\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460151 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/efe56e91-46ea-4365-8dc4-643fafea609a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460212 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/efe56e91-46ea-4365-8dc4-643fafea609a-cni-binary-copy\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460234 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-multus-cni-dir\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460253 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqrhz\" (UniqueName: \"kubernetes.io/projected/2daa6777-c1b1-4fae-9c14-cfe10867288a-kube-api-access-rqrhz\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460314 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd6d1b9a-0fd9-43be-9ed5-7430e830b94f-proxy-tls\") pod \"machine-config-daemon-tggq9\" (UID: \"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\") " 
pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460337 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2daa6777-c1b1-4fae-9c14-cfe10867288a-cni-binary-copy\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460357 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-var-lib-cni-bin\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460386 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-hostroot\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460441 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-os-release\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460472 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9899\" (UniqueName: \"kubernetes.io/projected/bd6d1b9a-0fd9-43be-9ed5-7430e830b94f-kube-api-access-h9899\") pod \"machine-config-daemon-tggq9\" (UID: \"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\") " 
pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.460491 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s5kg\" (UniqueName: \"kubernetes.io/projected/efe56e91-46ea-4365-8dc4-643fafea609a-kube-api-access-8s5kg\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.481463 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.500039 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.515443 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.530205 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f
0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.540402 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.554218 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.561841 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-os-release\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.561891 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s5kg\" (UniqueName: \"kubernetes.io/projected/efe56e91-46ea-4365-8dc4-643fafea609a-kube-api-access-8s5kg\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.561916 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9899\" (UniqueName: \"kubernetes.io/projected/bd6d1b9a-0fd9-43be-9ed5-7430e830b94f-kube-api-access-h9899\") pod \"machine-config-daemon-tggq9\" (UID: \"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\") " pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 
00:07:57.561935 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/efe56e91-46ea-4365-8dc4-643fafea609a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.561954 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd6d1b9a-0fd9-43be-9ed5-7430e830b94f-mcd-auth-proxy-config\") pod \"machine-config-daemon-tggq9\" (UID: \"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\") " pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.561977 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-var-lib-kubelet\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.561997 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bd6d1b9a-0fd9-43be-9ed5-7430e830b94f-rootfs\") pod \"machine-config-daemon-tggq9\" (UID: \"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\") " pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562016 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-run-netns\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562036 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/efe56e91-46ea-4365-8dc4-643fafea609a-os-release\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562056 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2daa6777-c1b1-4fae-9c14-cfe10867288a-multus-daemon-config\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562076 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/efe56e91-46ea-4365-8dc4-643fafea609a-cnibin\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562095 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-run-k8s-cni-cncf-io\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562115 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/efe56e91-46ea-4365-8dc4-643fafea609a-system-cni-dir\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562134 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-multus-socket-dir-parent\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562157 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-multus-conf-dir\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562181 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-cnibin\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562202 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-var-lib-cni-multus\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562189 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bd6d1b9a-0fd9-43be-9ed5-7430e830b94f-rootfs\") pod \"machine-config-daemon-tggq9\" (UID: \"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\") " pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562266 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-run-multus-certs\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562259 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-var-lib-kubelet\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562222 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-run-multus-certs\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562309 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-run-netns\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562331 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-cnibin\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562354 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-run-k8s-cni-cncf-io\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " 
pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562270 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/efe56e91-46ea-4365-8dc4-643fafea609a-cnibin\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562384 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-multus-socket-dir-parent\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562386 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-var-lib-cni-multus\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562417 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-multus-conf-dir\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562379 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-system-cni-dir\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562436 4825 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/efe56e91-46ea-4365-8dc4-643fafea609a-system-cni-dir\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562443 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-os-release\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562480 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-etc-kubernetes\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562490 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-system-cni-dir\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562480 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/efe56e91-46ea-4365-8dc4-643fafea609a-os-release\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562521 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-etc-kubernetes\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562584 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/efe56e91-46ea-4365-8dc4-643fafea609a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562637 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/efe56e91-46ea-4365-8dc4-643fafea609a-cni-binary-copy\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562666 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-multus-cni-dir\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562687 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqrhz\" (UniqueName: \"kubernetes.io/projected/2daa6777-c1b1-4fae-9c14-cfe10867288a-kube-api-access-rqrhz\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562732 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/bd6d1b9a-0fd9-43be-9ed5-7430e830b94f-proxy-tls\") pod \"machine-config-daemon-tggq9\" (UID: \"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\") " pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562751 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2daa6777-c1b1-4fae-9c14-cfe10867288a-cni-binary-copy\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562769 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-var-lib-cni-bin\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562786 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-hostroot\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562845 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-host-var-lib-cni-bin\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562860 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-hostroot\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " 
pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.562884 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2daa6777-c1b1-4fae-9c14-cfe10867288a-multus-cni-dir\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.563005 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/efe56e91-46ea-4365-8dc4-643fafea609a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.563302 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2daa6777-c1b1-4fae-9c14-cfe10867288a-multus-daemon-config\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.563471 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/efe56e91-46ea-4365-8dc4-643fafea609a-cni-binary-copy\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.563608 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bd6d1b9a-0fd9-43be-9ed5-7430e830b94f-mcd-auth-proxy-config\") pod \"machine-config-daemon-tggq9\" (UID: \"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\") " pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 
00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.563621 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2daa6777-c1b1-4fae-9c14-cfe10867288a-cni-binary-copy\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.567896 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bd6d1b9a-0fd9-43be-9ed5-7430e830b94f-proxy-tls\") pod \"machine-config-daemon-tggq9\" (UID: \"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\") " pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.573865 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.578910 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s5kg\" (UniqueName: \"kubernetes.io/projected/efe56e91-46ea-4365-8dc4-643fafea609a-kube-api-access-8s5kg\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.582253 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9899\" (UniqueName: \"kubernetes.io/projected/bd6d1b9a-0fd9-43be-9ed5-7430e830b94f-kube-api-access-h9899\") pod \"machine-config-daemon-tggq9\" (UID: \"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\") " pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.586346 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqrhz\" (UniqueName: \"kubernetes.io/projected/2daa6777-c1b1-4fae-9c14-cfe10867288a-kube-api-access-rqrhz\") pod \"multus-zfx7x\" (UID: \"2daa6777-c1b1-4fae-9c14-cfe10867288a\") " pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.589394 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.608020 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.623799 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.638598 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.649908 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.672659 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zfx7x" Feb 19 00:07:57 crc kubenswrapper[4825]: W0219 00:07:57.686835 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2daa6777_c1b1_4fae_9c14_cfe10867288a.slice/crio-dcb3199382dae6b7c3acbfac1176af60996be2d6e88f4fce798dcc0f58222520 WatchSource:0}: Error finding container dcb3199382dae6b7c3acbfac1176af60996be2d6e88f4fce798dcc0f58222520: Status 404 returned error can't find the container with id dcb3199382dae6b7c3acbfac1176af60996be2d6e88f4fce798dcc0f58222520 Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.690785 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:07:57 crc kubenswrapper[4825]: W0219 00:07:57.714298 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd6d1b9a_0fd9_43be_9ed5_7430e830b94f.slice/crio-07a0761486c10e6c87d0a22138a4d9ea3dc97fd4013b14ba85d3b1b13e9a8782 WatchSource:0}: Error finding container 07a0761486c10e6c87d0a22138a4d9ea3dc97fd4013b14ba85d3b1b13e9a8782: Status 404 returned error can't find the container with id 07a0761486c10e6c87d0a22138a4d9ea3dc97fd4013b14ba85d3b1b13e9a8782 Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.767456 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bdpln"] Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.768659 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.772072 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.772449 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.773390 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.774036 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.776084 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.776099 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.776263 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.790244 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.805266 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.823740 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 
00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.834321 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.850254 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.864922 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-kubelet\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.864966 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-slash\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865023 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 
00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865213 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-cni-bin\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865308 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-systemd-units\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865334 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-cni-netd\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865387 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-run-netns\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865414 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-var-lib-openvswitch\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865475 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-node-log\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865542 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc4zd\" (UniqueName: \"kubernetes.io/projected/0c24ef0e-b402-4585-a79a-6b98b9896f5a-kube-api-access-sc4zd\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865629 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-run-ovn-kubernetes\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865659 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-systemd\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865681 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-log-socket\") pod \"ovnkube-node-bdpln\" (UID: 
\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865738 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovn-node-metrics-cert\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865770 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-openvswitch\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865805 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovnkube-script-lib\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865833 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-ovn\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865862 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-etc-openvswitch\") pod 
\"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865884 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovnkube-config\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.865924 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-env-overrides\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.869687 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.887012 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.901390 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.919861 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.933137 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.945482 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967271 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-slash\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967328 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967353 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-cni-bin\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967376 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-systemd-units\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967394 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-cni-netd\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967426 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-run-netns\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967446 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-var-lib-openvswitch\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967467 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-node-log\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967483 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-systemd-units\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967499 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc4zd\" (UniqueName: 
\"kubernetes.io/projected/0c24ef0e-b402-4585-a79a-6b98b9896f5a-kube-api-access-sc4zd\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967563 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967642 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-systemd\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967670 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-log-socket\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967644 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-node-log\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967715 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-log-socket\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967677 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-var-lib-openvswitch\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967563 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-run-netns\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967735 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-systemd\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967716 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-run-ovn-kubernetes\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967410 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-slash\") pod \"ovnkube-node-bdpln\" 
(UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967694 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-run-ovn-kubernetes\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967894 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovn-node-metrics-cert\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967570 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-cni-netd\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967919 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-openvswitch\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967796 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-cni-bin\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967943 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovnkube-script-lib\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967967 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-ovn\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967986 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-etc-openvswitch\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.968006 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovnkube-config\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.968024 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-env-overrides\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc 
kubenswrapper[4825]: I0219 00:07:57.968047 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-kubelet\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.968093 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-kubelet\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.968122 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-ovn\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.968144 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-etc-openvswitch\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.967982 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-openvswitch\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.968095 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:57Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.968826 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-env-overrides\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.968980 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovnkube-config\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.969245 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovnkube-script-lib\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.973690 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovn-node-metrics-cert\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:57 crc kubenswrapper[4825]: I0219 00:07:57.983604 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc4zd\" (UniqueName: 
\"kubernetes.io/projected/0c24ef0e-b402-4585-a79a-6b98b9896f5a-kube-api-access-sc4zd\") pod \"ovnkube-node-bdpln\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") " pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.006137 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:19:42.321431325 +0000 UTC Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.082478 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.235332 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db"} Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.236974 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78" exitCode=0 Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.237055 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78"} Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.237228 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"4f065c9b79301d18b2d8b03bcf6c82fe34e626bb5200be38e719f5a91c57e5f1"} Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.239057 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerStarted","Data":"f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f"} Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.239109 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerStarted","Data":"e296002b2e0a47b132ee41abaebf9e72c3d1b6e2278117b99d5b8b095a271a5e"} Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.239123 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerStarted","Data":"07a0761486c10e6c87d0a22138a4d9ea3dc97fd4013b14ba85d3b1b13e9a8782"} Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.240806 4825 scope.go:117] "RemoveContainer" containerID="a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611" Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.240939 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.240803 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfx7x" event={"ID":"2daa6777-c1b1-4fae-9c14-cfe10867288a","Type":"ContainerStarted","Data":"5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a"} Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.240974 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-zfx7x" event={"ID":"2daa6777-c1b1-4fae-9c14-cfe10867288a","Type":"ContainerStarted","Data":"dcb3199382dae6b7c3acbfac1176af60996be2d6e88f4fce798dcc0f58222520"} Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.251847 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.266298 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.277713 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.286523 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.298178 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.316752 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.331893 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.347709 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.361432 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.383220 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.395885 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.407890 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.429770 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.446783 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.461714 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.475528 4825 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.494536 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.510058 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.528551 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.541878 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.554545 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.563417 4825 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.563532 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/efe56e91-46ea-4365-8dc4-643fafea609a-cni-sysctl-allowlist podName:efe56e91-46ea-4365-8dc4-643fafea609a nodeName:}" failed. No retries permitted until 2026-02-19 00:07:59.063495808 +0000 UTC m=+24.754461855 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/efe56e91-46ea-4365-8dc4-643fafea609a-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-lb5zm" (UID: "efe56e91-46ea-4365-8dc4-643fafea609a") : failed to sync configmap cache: timed out waiting for the condition Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.567149 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.580736 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba9
3a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.598480 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.611865 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:58Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.623535 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.675188 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.675385 4825 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:02.675354696 +0000 UTC m=+28.366320743 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.675466 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.675517 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.675542 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:58 crc kubenswrapper[4825]: I0219 00:07:58.675561 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.675646 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.675655 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.675696 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:02.675687915 +0000 UTC m=+28.366653962 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.675713 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.675726 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:02.675706085 +0000 UTC m=+28.366672132 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.675732 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.675751 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.675803 4825 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:02.675789718 +0000 UTC m=+28.366755765 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.676118 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.676137 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.676152 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:58 crc kubenswrapper[4825]: E0219 00:07:58.676194 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:02.676186208 +0000 UTC m=+28.367152255 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.007273 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:32:50.822487644 +0000 UTC Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.023275 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.029955 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.031744 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.038266 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f
0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.052760 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.065120 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.065195 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.065127 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:59 crc kubenswrapper[4825]: E0219 00:07:59.065303 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:59 crc kubenswrapper[4825]: E0219 00:07:59.065458 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:59 crc kubenswrapper[4825]: E0219 00:07:59.065679 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.072031 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.080445 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/efe56e91-46ea-4365-8dc4-643fafea609a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.081802 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/efe56e91-46ea-4365-8dc4-643fafea609a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lb5zm\" (UID: \"efe56e91-46ea-4365-8dc4-643fafea609a\") " pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.100862 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.132427 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.158802 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.181443 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.185682 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.196838 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: W0219 00:07:59.198921 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe56e91_46ea_4365_8dc4_643fafea609a.slice/crio-518b62c89d0f5ff1ba6120b14ca0a4f94b6e8c1045f8776dbef5bcce11ab7845 WatchSource:0}: Error finding container 518b62c89d0f5ff1ba6120b14ca0a4f94b6e8c1045f8776dbef5bcce11ab7845: Status 404 returned error can't find the container with id 518b62c89d0f5ff1ba6120b14ca0a4f94b6e8c1045f8776dbef5bcce11ab7845 Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.209042 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.229939 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.245194 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" event={"ID":"efe56e91-46ea-4365-8dc4-643fafea609a","Type":"ContainerStarted","Data":"518b62c89d0f5ff1ba6120b14ca0a4f94b6e8c1045f8776dbef5bcce11ab7845"} Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.249482 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc"} Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.249566 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2"} Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.249576 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e"} Feb 19 00:07:59 
crc kubenswrapper[4825]: I0219 00:07:59.249585 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a"} Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.254782 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5"} Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.257180 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.270905 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z"
Feb 19 00:07:59 crc kubenswrapper[4825]: E0219 00:07:59.271487 4825 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.288995 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z"
Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.303430 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z"
Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.319193 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z"
Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.336855 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z"
Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.350076 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z"
Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.372045 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.384680 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.397329 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.409726 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.422190 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.445216 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.458251 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.471950 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.647035 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vpm6d"] Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.647875 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vpm6d" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.650460 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.650868 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.651350 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.651379 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.667983 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.686672 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.698834 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.717459 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.733591 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.754738 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.767614 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.780347 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.788428 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d84md\" (UniqueName: \"kubernetes.io/projected/5a723093-6f53-4ca7-aa56-53ff684e90bd-kube-api-access-d84md\") pod \"node-ca-vpm6d\" (UID: \"5a723093-6f53-4ca7-aa56-53ff684e90bd\") " pod="openshift-image-registry/node-ca-vpm6d" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.788493 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a723093-6f53-4ca7-aa56-53ff684e90bd-host\") pod \"node-ca-vpm6d\" (UID: \"5a723093-6f53-4ca7-aa56-53ff684e90bd\") " pod="openshift-image-registry/node-ca-vpm6d" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.788537 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5a723093-6f53-4ca7-aa56-53ff684e90bd-serviceca\") pod \"node-ca-vpm6d\" (UID: \"5a723093-6f53-4ca7-aa56-53ff684e90bd\") " pod="openshift-image-registry/node-ca-vpm6d" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.795929 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.810682 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.822966 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.844118 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 
00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.859789 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.876708 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:59Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.889136 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d84md\" (UniqueName: \"kubernetes.io/projected/5a723093-6f53-4ca7-aa56-53ff684e90bd-kube-api-access-d84md\") pod \"node-ca-vpm6d\" (UID: \"5a723093-6f53-4ca7-aa56-53ff684e90bd\") " pod="openshift-image-registry/node-ca-vpm6d" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.889177 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a723093-6f53-4ca7-aa56-53ff684e90bd-host\") pod \"node-ca-vpm6d\" (UID: \"5a723093-6f53-4ca7-aa56-53ff684e90bd\") " pod="openshift-image-registry/node-ca-vpm6d" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.889207 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5a723093-6f53-4ca7-aa56-53ff684e90bd-serviceca\") pod \"node-ca-vpm6d\" (UID: \"5a723093-6f53-4ca7-aa56-53ff684e90bd\") " pod="openshift-image-registry/node-ca-vpm6d" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.889422 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a723093-6f53-4ca7-aa56-53ff684e90bd-host\") pod \"node-ca-vpm6d\" (UID: \"5a723093-6f53-4ca7-aa56-53ff684e90bd\") " pod="openshift-image-registry/node-ca-vpm6d" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.890148 4825 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5a723093-6f53-4ca7-aa56-53ff684e90bd-serviceca\") pod \"node-ca-vpm6d\" (UID: \"5a723093-6f53-4ca7-aa56-53ff684e90bd\") " pod="openshift-image-registry/node-ca-vpm6d" Feb 19 00:07:59 crc kubenswrapper[4825]: I0219 00:07:59.920400 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d84md\" (UniqueName: \"kubernetes.io/projected/5a723093-6f53-4ca7-aa56-53ff684e90bd-kube-api-access-d84md\") pod \"node-ca-vpm6d\" (UID: \"5a723093-6f53-4ca7-aa56-53ff684e90bd\") " pod="openshift-image-registry/node-ca-vpm6d" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.007767 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:19:22.062105218 +0000 UTC Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.029134 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vpm6d" Feb 19 00:08:00 crc kubenswrapper[4825]: W0219 00:08:00.048823 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a723093_6f53_4ca7_aa56_53ff684e90bd.slice/crio-55be30dc0b127ee6a9f9ea1e39d701fafc741e4968e6cd06bfcbeb53f9d0e1ca WatchSource:0}: Error finding container 55be30dc0b127ee6a9f9ea1e39d701fafc741e4968e6cd06bfcbeb53f9d0e1ca: Status 404 returned error can't find the container with id 55be30dc0b127ee6a9f9ea1e39d701fafc741e4968e6cd06bfcbeb53f9d0e1ca Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.257880 4825 generic.go:334] "Generic (PLEG): container finished" podID="efe56e91-46ea-4365-8dc4-643fafea609a" containerID="9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d" exitCode=0 Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.258019 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" event={"ID":"efe56e91-46ea-4365-8dc4-643fafea609a","Type":"ContainerDied","Data":"9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d"} Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.262744 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vpm6d" event={"ID":"5a723093-6f53-4ca7-aa56-53ff684e90bd","Type":"ContainerStarted","Data":"55be30dc0b127ee6a9f9ea1e39d701fafc741e4968e6cd06bfcbeb53f9d0e1ca"} Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.272097 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a"} Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.275306 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.289959 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.311586 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.325865 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.346706 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.358299 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.377278 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.393990 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.411128 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.426711 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.441752 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.454682 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.471121 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:00 crc kubenswrapper[4825]: I0219 00:08:00.516451 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:00Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.007904 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:57:58.698664981 +0000 UTC Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.065544 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:01 crc kubenswrapper[4825]: E0219 00:08:01.065716 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.065842 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.065869 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:01 crc kubenswrapper[4825]: E0219 00:08:01.066128 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:01 crc kubenswrapper[4825]: E0219 00:08:01.066266 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.207862 4825 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.210260 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.210306 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.210317 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.210468 4825 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.218611 4825 kubelet_node_status.go:115] "Node was previously registered" 
node="crc" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.218833 4825 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.219916 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.219955 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.219965 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.219982 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.219994 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:01Z","lastTransitionTime":"2026-02-19T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:01 crc kubenswrapper[4825]: E0219 00:08:01.235028 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.238702 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.238762 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.238775 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.238795 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.238811 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:01Z","lastTransitionTime":"2026-02-19T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:01 crc kubenswrapper[4825]: E0219 00:08:01.250997 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.255910 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.255940 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.255949 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.255972 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.255982 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:01Z","lastTransitionTime":"2026-02-19T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:01 crc kubenswrapper[4825]: E0219 00:08:01.275734 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.276769 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vpm6d" event={"ID":"5a723093-6f53-4ca7-aa56-53ff684e90bd","Type":"ContainerStarted","Data":"10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86"} Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.278742 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" event={"ID":"efe56e91-46ea-4365-8dc4-643fafea609a","Type":"ContainerStarted","Data":"e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5"} Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.281333 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.281361 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.281373 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.281389 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.281402 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:01Z","lastTransitionTime":"2026-02-19T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.294047 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: E0219 00:08:01.296539 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.308131 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.308193 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.308207 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.308232 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.308248 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:01Z","lastTransitionTime":"2026-02-19T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.312006 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: E0219 00:08:01.323849 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: E0219 00:08:01.324051 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.326404 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.326450 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.326461 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.326485 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.326499 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:01Z","lastTransitionTime":"2026-02-19T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.331240 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.347713 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.362231 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.380919 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.395357 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.413560 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.426172 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.429375 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.429424 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.429437 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.429460 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.429472 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:01Z","lastTransitionTime":"2026-02-19T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.440983 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d
433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.454039 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.466404 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.485029 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.504098 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.524331 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.535568 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.535952 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.535977 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.535996 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.536011 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:01Z","lastTransitionTime":"2026-02-19T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.541770 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.563927 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.580431 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.598105 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.614282 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.632875 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.638492 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.638733 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.638882 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.638976 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.639054 4825 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:01Z","lastTransitionTime":"2026-02-19T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.650487 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ku
be-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.664860 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.680357 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.699327 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.718707 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.733610 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.742618 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.742698 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.742715 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.742746 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.742765 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:01Z","lastTransitionTime":"2026-02-19T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.748815 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.845384 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.845428 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.845441 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.845460 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 
00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.845473 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:01Z","lastTransitionTime":"2026-02-19T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.948346 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.948420 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.948440 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.948472 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:01 crc kubenswrapper[4825]: I0219 00:08:01.948491 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:01Z","lastTransitionTime":"2026-02-19T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.009274 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:23:35.526184385 +0000 UTC Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.052212 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.052274 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.052297 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.052323 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.052345 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:02Z","lastTransitionTime":"2026-02-19T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.156263 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.156356 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.156380 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.156416 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.156446 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:02Z","lastTransitionTime":"2026-02-19T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.277982 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.278601 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.278630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.278656 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.278687 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:02Z","lastTransitionTime":"2026-02-19T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.285996 4825 generic.go:334] "Generic (PLEG): container finished" podID="efe56e91-46ea-4365-8dc4-643fafea609a" containerID="e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5" exitCode=0 Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.286138 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" event={"ID":"efe56e91-46ea-4365-8dc4-643fafea609a","Type":"ContainerDied","Data":"e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5"} Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.298581 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705"} Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.328193 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.348176 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.375159 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.381670 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.381742 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.381764 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.381790 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.381813 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:02Z","lastTransitionTime":"2026-02-19T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.391792 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.405984 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.423025 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.435084 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.456095 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.471668 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.485189 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.485244 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.485265 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.485292 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.485315 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:02Z","lastTransitionTime":"2026-02-19T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.487037 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.518827 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.535985 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.552821 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.574431 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.588852 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.588928 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.588951 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.588978 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.588996 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:02Z","lastTransitionTime":"2026-02-19T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.693783 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.693837 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.693855 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.693884 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.693903 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:02Z","lastTransitionTime":"2026-02-19T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.757088 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.757274 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:10.757246638 +0000 UTC m=+36.448212685 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.757381 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.757436 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.757480 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.757550 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.757602 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.757656 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:10.757647398 +0000 UTC m=+36.448613445 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.757748 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.757808 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.757835 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.757748 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.757913 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.757980 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.758230 4825 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.757938 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:10.757902855 +0000 UTC m=+36.448869072 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.758340 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:10.758320786 +0000 UTC m=+36.449286873 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:08:02 crc kubenswrapper[4825]: E0219 00:08:02.758377 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:10.758363497 +0000 UTC m=+36.449329574 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.798035 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.798122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.798145 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.798184 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.798213 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:02Z","lastTransitionTime":"2026-02-19T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.901368 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.901444 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.901467 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.901490 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:02 crc kubenswrapper[4825]: I0219 00:08:02.901543 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:02Z","lastTransitionTime":"2026-02-19T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.004765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.004840 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.004859 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.004890 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.004909 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:03Z","lastTransitionTime":"2026-02-19T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.010178 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 22:51:46.924865887 +0000 UTC Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.065316 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.065484 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:03 crc kubenswrapper[4825]: E0219 00:08:03.065624 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.065652 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:03 crc kubenswrapper[4825]: E0219 00:08:03.065834 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:03 crc kubenswrapper[4825]: E0219 00:08:03.066020 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.109240 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.109315 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.109343 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.109375 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.109396 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:03Z","lastTransitionTime":"2026-02-19T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.212502 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.212624 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.212651 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.212681 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.212701 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:03Z","lastTransitionTime":"2026-02-19T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.307645 4825 generic.go:334] "Generic (PLEG): container finished" podID="efe56e91-46ea-4365-8dc4-643fafea609a" containerID="05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af" exitCode=0 Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.307738 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" event={"ID":"efe56e91-46ea-4365-8dc4-643fafea609a","Type":"ContainerDied","Data":"05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af"} Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.317283 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.317353 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.317374 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.317409 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.317431 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:03Z","lastTransitionTime":"2026-02-19T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.325420 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.342316 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.371420 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.392013 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.413824 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.421841 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.421905 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.421925 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.421961 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.421981 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:03Z","lastTransitionTime":"2026-02-19T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.440406 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.461173 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.483638 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.503132 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.523647 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.526811 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.526853 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.526867 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.526891 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.526906 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:03Z","lastTransitionTime":"2026-02-19T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.544289 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 
00:08:03.585592 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.608411 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.629897 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.630206 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.630269 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.630287 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.630322 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.630345 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:03Z","lastTransitionTime":"2026-02-19T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.734128 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.734628 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.734642 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.734668 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.734683 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:03Z","lastTransitionTime":"2026-02-19T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.837992 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.838030 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.838062 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.838082 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.838094 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:03Z","lastTransitionTime":"2026-02-19T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.941022 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.941083 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.941095 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.941118 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:03 crc kubenswrapper[4825]: I0219 00:08:03.941131 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:03Z","lastTransitionTime":"2026-02-19T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.010919 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:56:49.038200706 +0000 UTC Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.045878 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.045950 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.045967 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.045994 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.046015 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:04Z","lastTransitionTime":"2026-02-19T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.149464 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.149566 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.149585 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.149612 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.149631 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:04Z","lastTransitionTime":"2026-02-19T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.252887 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.252934 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.252951 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.252975 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.252995 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:04Z","lastTransitionTime":"2026-02-19T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.318989 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b"} Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.319490 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.323881 4825 generic.go:334] "Generic (PLEG): container finished" podID="efe56e91-46ea-4365-8dc4-643fafea609a" containerID="885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd" exitCode=0 Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.323939 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" event={"ID":"efe56e91-46ea-4365-8dc4-643fafea609a","Type":"ContainerDied","Data":"885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd"} Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.345688 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.357130 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.357165 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.357176 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.357195 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.357209 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:04Z","lastTransitionTime":"2026-02-19T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.369359 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.371063 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.400672 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.418440 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.439685 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.459620 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.462229 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.462285 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.462300 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.462322 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.462338 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:04Z","lastTransitionTime":"2026-02-19T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.473561 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.489747 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.506019 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.525352 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.539649 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.554635 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.565586 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.565642 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.565654 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.565680 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.565697 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:04Z","lastTransitionTime":"2026-02-19T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.567946 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.583044 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 
00:08:04.598266 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.612400 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.663040 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.669498 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.669557 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.669566 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.669584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.669595 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:04Z","lastTransitionTime":"2026-02-19T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.682343 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.705264 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.720416 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.737482 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.756873 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.772092 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.772128 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.772138 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.772156 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.772168 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:04Z","lastTransitionTime":"2026-02-19T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.773984 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.790227 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.808491 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.808694 4825 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.875368 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.875407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 
00:08:04.875417 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.875433 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.875443 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:04Z","lastTransitionTime":"2026-02-19T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.978751 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.978796 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.978804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.978820 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:04 crc kubenswrapper[4825]: I0219 00:08:04.978830 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:04Z","lastTransitionTime":"2026-02-19T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.012647 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:56:40.965161422 +0000 UTC Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.065060 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.065239 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:05 crc kubenswrapper[4825]: E0219 00:08:05.065305 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.065384 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:05 crc kubenswrapper[4825]: E0219 00:08:05.065632 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:05 crc kubenswrapper[4825]: E0219 00:08:05.065814 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.081493 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.081594 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.081613 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.081648 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.081670 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:05Z","lastTransitionTime":"2026-02-19T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.184258 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.184304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.184318 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.184336 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.184350 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:05Z","lastTransitionTime":"2026-02-19T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.288193 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.288763 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.288824 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.288854 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.288916 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:05Z","lastTransitionTime":"2026-02-19T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.334650 4825 generic.go:334] "Generic (PLEG): container finished" podID="efe56e91-46ea-4365-8dc4-643fafea609a" containerID="2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a" exitCode=0 Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.334863 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.335726 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" event={"ID":"efe56e91-46ea-4365-8dc4-643fafea609a","Type":"ContainerDied","Data":"2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a"} Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.335924 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.377845 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.392295 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.392346 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.392358 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.392382 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.392397 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:05Z","lastTransitionTime":"2026-02-19T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.498723 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.498804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.498823 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.498854 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.498874 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:05Z","lastTransitionTime":"2026-02-19T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.602633 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.602742 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.602769 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.602801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.602825 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:05Z","lastTransitionTime":"2026-02-19T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.706407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.706462 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.706480 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.706542 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.706562 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:05Z","lastTransitionTime":"2026-02-19T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.810020 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.810076 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.810094 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.810120 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.810138 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:05Z","lastTransitionTime":"2026-02-19T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.832222 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.852321 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.897699 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.913040 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.913086 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.913094 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.913112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.913121 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:05Z","lastTransitionTime":"2026-02-19T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.917285 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.940557 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:05 crc kubenswrapper[4825]: I0219 00:08:05.980348 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.000695 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.013256 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:26:34.918892199 +0000 UTC Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.015816 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.015867 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.015883 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.015909 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.015923 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:06Z","lastTransitionTime":"2026-02-19T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.025284 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.056287 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.081936 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.096620 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.109450 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.118347 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.118398 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.118429 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.118448 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.118461 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:06Z","lastTransitionTime":"2026-02-19T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.122599 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.140106 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.159231 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.176336 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.192212 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.205184 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.221347 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.221426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.221440 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.221455 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.221470 4825 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:06Z","lastTransitionTime":"2026-02-19T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.224198 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name
\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.243695 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.263286 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.288841 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.306128 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.322876 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.323616 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.323682 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.323696 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:06 crc 
kubenswrapper[4825]: I0219 00:08:06.323719 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.323732 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:06Z","lastTransitionTime":"2026-02-19T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.342468 4825 generic.go:334] "Generic (PLEG): container finished" podID="efe56e91-46ea-4365-8dc4-643fafea609a" containerID="cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193" exitCode=0 Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.342588 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" event={"ID":"efe56e91-46ea-4365-8dc4-643fafea609a","Type":"ContainerDied","Data":"cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193"} Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.342671 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.348612 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.367341 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.387021 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.404969 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.418796 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.429788 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.429880 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.429939 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.429966 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.430024 4825 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:06Z","lastTransitionTime":"2026-02-19T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.433820 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ku
be-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.446931 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.465543 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.482254 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.518663 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.533064 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.533113 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.533125 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.533146 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.533160 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:06Z","lastTransitionTime":"2026-02-19T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.540347 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.554969 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.574975 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.588905 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.602196 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.613848 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.631805 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.636241 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.636305 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.636319 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.636344 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.636361 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:06Z","lastTransitionTime":"2026-02-19T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.644819 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.661904 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.680046 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.695601 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.738689 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.738750 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.738766 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.738790 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.738804 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:06Z","lastTransitionTime":"2026-02-19T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.841855 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.841931 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.841951 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.841982 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.842001 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:06Z","lastTransitionTime":"2026-02-19T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.944732 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.944777 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.944791 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.944812 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:06 crc kubenswrapper[4825]: I0219 00:08:06.944826 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:06Z","lastTransitionTime":"2026-02-19T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.013831 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:16:43.292625683 +0000 UTC Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.048193 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.048240 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.048259 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.048312 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.048331 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:07Z","lastTransitionTime":"2026-02-19T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.068927 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:07 crc kubenswrapper[4825]: E0219 00:08:07.069099 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.069634 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:07 crc kubenswrapper[4825]: E0219 00:08:07.069737 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.069809 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:07 crc kubenswrapper[4825]: E0219 00:08:07.069888 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.151496 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.151592 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.151610 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.151636 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.151657 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:07Z","lastTransitionTime":"2026-02-19T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.255340 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.255418 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.255438 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.255472 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.255493 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:07Z","lastTransitionTime":"2026-02-19T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.351043 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/0.log" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.355728 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b" exitCode=1 Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.355904 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b"} Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.357049 4825 scope.go:117] "RemoveContainer" containerID="e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.358863 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.358965 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.358985 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.359016 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.359034 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:07Z","lastTransitionTime":"2026-02-19T00:08:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.362557 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" event={"ID":"efe56e91-46ea-4365-8dc4-643fafea609a","Type":"ContainerStarted","Data":"681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881"} Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.377982 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.401408 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.426461 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.443421 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.462892 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.462965 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.462982 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.463006 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.463020 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:07Z","lastTransitionTime":"2026-02-19T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.468633 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.487704 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.511043 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.537846 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.565624 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.567281 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.567340 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.567357 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.567382 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.567403 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:07Z","lastTransitionTime":"2026-02-19T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.584837 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.600922 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.613749 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.643588 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"message\\\":\\\"00:08:06.715919 6074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:06.719630 6074 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:06.719648 6074 handler.go:190] Sending 
*v1.Node event handler 7 for removal\\\\nI0219 00:08:06.719668 6074 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:08:06.719685 6074 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:08:06.719701 6074 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:06.720328 6074 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:06.720342 6074 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:06.720417 6074 factory.go:656] Stopping watch factory\\\\nI0219 00:08:06.720436 6074 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:06.720473 6074 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:06.720489 6074 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:06.720499 6074 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:06.720534 6074 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:08:06.720549 6074 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:06.720548 6074 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.657746 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.671023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.671067 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.671080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:07 crc 
kubenswrapper[4825]: I0219 00:08:07.671101 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.671114 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:07Z","lastTransitionTime":"2026-02-19T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.676120 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.690227 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.710333 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.727191 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.752663 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.769559 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.775023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.775075 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.775087 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.775106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.775117 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:07Z","lastTransitionTime":"2026-02-19T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.784312 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.808222 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"message\\\":\\\"00:08:06.715919 6074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:06.719630 6074 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:06.719648 6074 handler.go:190] Sending 
*v1.Node event handler 7 for removal\\\\nI0219 00:08:06.719668 6074 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:08:06.719685 6074 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:08:06.719701 6074 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:06.720328 6074 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:06.720342 6074 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:06.720417 6074 factory.go:656] Stopping watch factory\\\\nI0219 00:08:06.720436 6074 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:06.720473 6074 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:06.720489 6074 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:06.720499 6074 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:06.720534 6074 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:08:06.720549 6074 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:06.720548 6074 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.827583 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.844796 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.864052 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.878697 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.878798 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.878812 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.878831 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.878844 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:07Z","lastTransitionTime":"2026-02-19T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.883486 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z 
is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.900651 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.918488 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.982444 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.982493 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.982522 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.982546 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:07 crc kubenswrapper[4825]: I0219 00:08:07.982557 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:07Z","lastTransitionTime":"2026-02-19T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.014434 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:51:42.584389836 +0000 UTC Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.086327 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.086378 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.086639 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.086669 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.086681 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:08Z","lastTransitionTime":"2026-02-19T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.190056 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.190437 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.190562 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.190642 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.190698 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:08Z","lastTransitionTime":"2026-02-19T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.294926 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.294982 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.294996 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.295022 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.295036 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:08Z","lastTransitionTime":"2026-02-19T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.371731 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/0.log" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.375936 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf"} Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.376106 4825 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.397371 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.398022 4825 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.398109 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.398122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.398149 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.398164 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:08Z","lastTransitionTime":"2026-02-19T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.422477 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.437179 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.450558 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.467985 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.483353 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.496073 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.500756 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.500814 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.500823 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:08 crc 
kubenswrapper[4825]: I0219 00:08:08.500845 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.500859 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:08Z","lastTransitionTime":"2026-02-19T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.523777 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"message\\\":\\\"00:08:06.715919 6074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:06.719630 6074 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:06.719648 6074 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 00:08:06.719668 6074 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0219 00:08:06.719685 6074 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:08:06.719701 6074 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:06.720328 6074 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:06.720342 6074 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:06.720417 6074 factory.go:656] Stopping watch factory\\\\nI0219 00:08:06.720436 6074 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:06.720473 6074 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:06.720489 6074 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:06.720499 6074 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:06.720534 6074 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:08:06.720549 6074 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:06.720548 6074 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.542373 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.560104 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.581360 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.597660 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.603372 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.603466 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.603494 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.603585 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.603614 4825 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:08Z","lastTransitionTime":"2026-02-19T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.617603 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ku
be-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.632798 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:08Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.706563 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.706760 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.706872 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.707008 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.707111 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:08Z","lastTransitionTime":"2026-02-19T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.810049 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.810217 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.810311 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.810402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.810479 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:08Z","lastTransitionTime":"2026-02-19T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.923011 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.923094 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.923113 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.923142 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:08 crc kubenswrapper[4825]: I0219 00:08:08.923161 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:08Z","lastTransitionTime":"2026-02-19T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.015063 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:02:45.450509364 +0000 UTC Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.026326 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.026370 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.026383 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.026402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.026415 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:09Z","lastTransitionTime":"2026-02-19T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.065132 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.065173 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:09 crc kubenswrapper[4825]: E0219 00:08:09.065256 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.065191 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:09 crc kubenswrapper[4825]: E0219 00:08:09.065482 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:09 crc kubenswrapper[4825]: E0219 00:08:09.065654 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.130308 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.130396 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.130410 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.130433 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.130448 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:09Z","lastTransitionTime":"2026-02-19T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.234462 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.234544 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.234557 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.234580 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.234595 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:09Z","lastTransitionTime":"2026-02-19T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.338278 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.338367 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.338392 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.338431 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.338458 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:09Z","lastTransitionTime":"2026-02-19T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.384482 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/1.log" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.385914 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/0.log" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.391792 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf" exitCode=1 Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.391877 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf"} Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.391968 4825 scope.go:117] "RemoveContainer" containerID="e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.393375 4825 scope.go:117] "RemoveContainer" containerID="e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf" Feb 19 00:08:09 crc kubenswrapper[4825]: E0219 00:08:09.393762 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.421679 4825 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02
-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe
21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.442639 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.442694 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.442711 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.442739 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.442754 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:09Z","lastTransitionTime":"2026-02-19T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.443727 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d
433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.466370 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.483908 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.500339 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.515209 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.528128 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.547247 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.547296 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.547313 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:09 crc 
kubenswrapper[4825]: I0219 00:08:09.547337 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.547353 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:09Z","lastTransitionTime":"2026-02-19T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.556001 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1333609bb458f42eb04d66cfa1604a6554a524d281dbc2f7e64b6da255bb26b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"message\\\":\\\"00:08:06.715919 6074 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:06.719630 6074 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:06.719648 6074 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 00:08:06.719668 6074 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0219 00:08:06.719685 6074 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:08:06.719701 6074 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:06.720328 6074 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:06.720342 6074 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:06.720417 6074 factory.go:656] Stopping watch factory\\\\nI0219 00:08:06.720436 6074 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:06.720473 6074 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:06.720489 6074 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:06.720499 6074 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:06.720534 6074 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:08:06.720549 6074 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:06.720548 6074 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:08Z\\\",\\\"message\\\":\\\"or removal\\\\nI0219 00:08:08.385724 6256 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:08:08.385912 6256 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:08.385972 6256 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:08.385981 6256 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:08:08.386046 6256 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:08.385925 6256 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:08.386157 
6256 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:08.386243 6256 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:08.386284 6256 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:08.386359 6256 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 00:08:08.386176 6256 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:08.386428 6256 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:08.386483 6256 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:08:08.386550 6256 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:08.386726 6256 factory.go:656] Stopping watch factory\\\\nI0219 00:08:08.386774 6256 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.577316 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.592980 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.613216 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.628002 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.645645 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.651681 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:09 crc 
kubenswrapper[4825]: I0219 00:08:09.651734 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.651746 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.651766 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.651779 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:09Z","lastTransitionTime":"2026-02-19T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.663177 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:09Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.756369 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.756439 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.756456 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.756485 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.756503 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:09Z","lastTransitionTime":"2026-02-19T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.860358 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.860435 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.860453 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.860482 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.860535 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:09Z","lastTransitionTime":"2026-02-19T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.963876 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.963949 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.963964 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.963988 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:09 crc kubenswrapper[4825]: I0219 00:08:09.964006 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:09Z","lastTransitionTime":"2026-02-19T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.015694 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:13:38.781916416 +0000 UTC Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.068084 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.068139 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.068157 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.068183 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.068204 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:10Z","lastTransitionTime":"2026-02-19T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.172223 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.172295 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.172315 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.172344 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.172364 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:10Z","lastTransitionTime":"2026-02-19T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.276133 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.276216 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.276238 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.276266 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.276286 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:10Z","lastTransitionTime":"2026-02-19T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.379319 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.379383 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.379401 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.379427 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.379449 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:10Z","lastTransitionTime":"2026-02-19T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.397923 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/1.log" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.403941 4825 scope.go:117] "RemoveContainer" containerID="e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf" Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.404269 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.424693 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.457685 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:08Z\\\",\\\"message\\\":\\\"or removal\\\\nI0219 00:08:08.385724 6256 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:08:08.385912 6256 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:08.385972 6256 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:08.385981 6256 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI0219 00:08:08.386046 6256 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:08.385925 6256 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:08.386157 6256 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:08.386243 6256 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:08.386284 6256 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:08.386359 6256 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 00:08:08.386176 6256 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:08.386428 6256 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:08.386483 6256 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:08:08.386550 6256 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:08.386726 6256 factory.go:656] Stopping watch factory\\\\nI0219 00:08:08.386774 6256 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.478820 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.482967 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.483019 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.483038 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:10 crc 
kubenswrapper[4825]: I0219 00:08:10.483063 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.483083 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:10Z","lastTransitionTime":"2026-02-19T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.501080 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.520962 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.561234 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.585396 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.586304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.586346 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.586358 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.586375 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.586389 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:10Z","lastTransitionTime":"2026-02-19T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.589826 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7"] Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.590537 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.593093 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.594072 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.611813 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.626939 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.638882 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.650206 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.663488 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.673325 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vhfl7\" (UID: \"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.673379 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5f6r\" (UniqueName: \"kubernetes.io/projected/bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f-kube-api-access-c5f6r\") pod \"ovnkube-control-plane-749d76644c-vhfl7\" (UID: \"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.673404 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vhfl7\" (UID: \"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.673429 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vhfl7\" (UID: \"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.678925 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.690610 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:10 crc 
kubenswrapper[4825]: I0219 00:08:10.690642 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.690653 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.690670 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.690682 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:10Z","lastTransitionTime":"2026-02-19T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.695854 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.710855 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.725670 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.744857 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.760465 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.774761 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.774875 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vhfl7\" (UID: \"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.774943 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.775013 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.775096 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:26.775044506 +0000 UTC m=+52.466010593 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.775195 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.775376 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.775494 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.775422 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.776477 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vhfl7\" (UID: \"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.775767 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:26.775733004 +0000 UTC m=+52.466699161 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.776729 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vhfl7\" (UID: \"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.776456 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.776996 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.776818 4825 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:26.776792903 +0000 UTC m=+52.467759000 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.777173 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.777328 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:26.777306377 +0000 UTC m=+52.468272464 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.777366 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.777404 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5f6r\" (UniqueName: \"kubernetes.io/projected/bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f-kube-api-access-c5f6r\") pod \"ovnkube-control-plane-749d76644c-vhfl7\" (UID: \"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.777491 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vhfl7\" (UID: \"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.777411 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.778239 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.778269 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vhfl7\" (UID: \"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: E0219 00:08:10.778340 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:26.778309634 +0000 UTC m=+52.469275721 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.780593 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.786174 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vhfl7\" (UID: \"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.792789 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.794892 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.794956 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.794971 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.794996 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.795013 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:10Z","lastTransitionTime":"2026-02-19T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.798360 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5f6r\" (UniqueName: \"kubernetes.io/projected/bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f-kube-api-access-c5f6r\") pod \"ovnkube-control-plane-749d76644c-vhfl7\" (UID: \"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.809859 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.830448 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.850594 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.873699 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.895993 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.898262 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.898323 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.898342 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.898369 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.898393 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:10Z","lastTransitionTime":"2026-02-19T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.908121 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.919693 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: W0219 00:08:10.928892 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf7738b0_f0ce_4b7c_85fb_0c4fc9ce443f.slice/crio-d18cd47d3e6b8063731d71cd0b0d0af1f652d8bb2763e36be400e0083de1ece0 WatchSource:0}: Error finding container d18cd47d3e6b8063731d71cd0b0d0af1f652d8bb2763e36be400e0083de1ece0: Status 404 returned error can't find the container with id d18cd47d3e6b8063731d71cd0b0d0af1f652d8bb2763e36be400e0083de1ece0 Feb 19 
00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.944023 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.970188 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:08Z\\\",\\\"message\\\":\\\"or removal\\\\nI0219 00:08:08.385724 6256 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:08:08.385912 6256 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:08.385972 6256 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:08.385981 6256 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI0219 00:08:08.386046 6256 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:08.385925 6256 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:08.386157 6256 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:08.386243 6256 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:08.386284 6256 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:08.386359 6256 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 00:08:08.386176 6256 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:08.386428 6256 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:08.386483 6256 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:08:08.386550 6256 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:08.386726 6256 factory.go:656] Stopping watch factory\\\\nI0219 00:08:08.386774 6256 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:10 crc kubenswrapper[4825]: I0219 00:08:10.984672 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.001836 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.001891 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.001904 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.001924 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.001939 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.015821 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:27:01.263684095 +0000 UTC Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.065586 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.065592 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:11 crc kubenswrapper[4825]: E0219 00:08:11.065773 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.065616 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:11 crc kubenswrapper[4825]: E0219 00:08:11.066313 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:11 crc kubenswrapper[4825]: E0219 00:08:11.066490 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.106130 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.106185 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.106197 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.106217 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.106230 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.209350 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.209404 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.209418 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.209436 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.209450 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.311685 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.311725 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.311736 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.311756 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.311770 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.410315 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" event={"ID":"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f","Type":"ContainerStarted","Data":"c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4"} Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.410378 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" event={"ID":"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f","Type":"ContainerStarted","Data":"d18cd47d3e6b8063731d71cd0b0d0af1f652d8bb2763e36be400e0083de1ece0"} Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.414172 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.414207 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.414219 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.414236 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.414250 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.520915 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.521013 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.521040 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.521077 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.521105 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.624905 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.624980 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.624996 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.625024 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.625042 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.693165 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.693205 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.693215 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.693229 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.693238 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: E0219 00:08:11.712395 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.717953 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.717993 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.718004 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.718024 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.718045 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: E0219 00:08:11.738311 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.745119 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.745224 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.745248 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.745284 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.745311 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: E0219 00:08:11.766156 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.769713 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.769746 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.769755 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.769769 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.769780 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: E0219 00:08:11.780565 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.783998 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.784029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.784039 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.784054 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.784065 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: E0219 00:08:11.802193 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:11Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:11 crc kubenswrapper[4825]: E0219 00:08:11.802314 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.803940 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.803964 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.803972 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.803987 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.803998 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.908645 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.908986 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.909081 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.909222 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:11 crc kubenswrapper[4825]: I0219 00:08:11.909313 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:11Z","lastTransitionTime":"2026-02-19T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.012123 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.012172 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.012185 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.012206 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.012220 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:12Z","lastTransitionTime":"2026-02-19T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.015967 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:01:43.931655247 +0000 UTC Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.112772 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bhnmw"] Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.113867 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:12 crc kubenswrapper[4825]: E0219 00:08:12.114023 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.116244 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.116312 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.116337 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.116383 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.116407 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:12Z","lastTransitionTime":"2026-02-19T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.143710 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.162773 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.195442 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.195649 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmbbq\" (UniqueName: \"kubernetes.io/projected/80aa664d-e111-41f6-815d-f4185e1f72ff-kube-api-access-vmbbq\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.197425 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:08Z\\\",\\\"message\\\":\\\"or removal\\\\nI0219 00:08:08.385724 6256 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:08:08.385912 6256 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:08.385972 6256 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:08.385981 6256 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI0219 00:08:08.386046 6256 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:08.385925 6256 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:08.386157 6256 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:08.386243 6256 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:08.386284 6256 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:08.386359 6256 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 00:08:08.386176 6256 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:08.386428 6256 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:08.386483 6256 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:08:08.386550 6256 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:08.386726 6256 factory.go:656] Stopping watch factory\\\\nI0219 00:08:08.386774 6256 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.215667 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.225879 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.225960 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.226037 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.226080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.226107 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:12Z","lastTransitionTime":"2026-02-19T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.235255 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.252242 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.271962 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.289563 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.296482 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.296669 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmbbq\" (UniqueName: \"kubernetes.io/projected/80aa664d-e111-41f6-815d-f4185e1f72ff-kube-api-access-vmbbq\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " 
pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:12 crc kubenswrapper[4825]: E0219 00:08:12.296905 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:08:12 crc kubenswrapper[4825]: E0219 00:08:12.297012 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs podName:80aa664d-e111-41f6-815d-f4185e1f72ff nodeName:}" failed. No retries permitted until 2026-02-19 00:08:12.796982884 +0000 UTC m=+38.487948961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs") pod "network-metrics-daemon-bhnmw" (UID: "80aa664d-e111-41f6-815d-f4185e1f72ff") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.309484 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.326568 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.328965 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.329010 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.329043 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.329082 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.329099 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:12Z","lastTransitionTime":"2026-02-19T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.332365 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmbbq\" (UniqueName: \"kubernetes.io/projected/80aa664d-e111-41f6-815d-f4185e1f72ff-kube-api-access-vmbbq\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.351055 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc 
kubenswrapper[4825]: I0219 00:08:12.374388 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.396863 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.418851 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" event={"ID":"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f","Type":"ContainerStarted","Data":"95d7a84e3ca4855d8ed65e1ad44f67d39c400a14ead7ae545b99e7e91b24d96e"} Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.419879 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.432110 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.432160 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.432175 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.432209 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.432227 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:12Z","lastTransitionTime":"2026-02-19T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.439876 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.457086 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.473958 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc 
kubenswrapper[4825]: I0219 00:08:12.491405 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.513440 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.534247 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.535845 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.535914 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.535933 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.535963 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.535984 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:12Z","lastTransitionTime":"2026-02-19T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.559151 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.584955 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.606631 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.628318 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.640147 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.640214 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.640232 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:12 crc 
kubenswrapper[4825]: I0219 00:08:12.640262 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.640280 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:12Z","lastTransitionTime":"2026-02-19T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.665137 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:08Z\\\",\\\"message\\\":\\\"or removal\\\\nI0219 00:08:08.385724 6256 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:08:08.385912 6256 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:08.385972 6256 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:08.385981 6256 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI0219 00:08:08.386046 6256 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:08.385925 6256 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:08.386157 6256 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:08.386243 6256 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:08.386284 6256 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:08.386359 6256 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 00:08:08.386176 6256 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:08.386428 6256 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:08.386483 6256 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:08:08.386550 6256 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:08.386726 6256 factory.go:656] Stopping watch factory\\\\nI0219 00:08:08.386774 6256 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.686386 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.710775 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.731611 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.743568 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.743639 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.743660 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.743693 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.743714 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:12Z","lastTransitionTime":"2026-02-19T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.757092 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.776613 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.801120 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.802585 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:12 crc kubenswrapper[4825]: E0219 00:08:12.802861 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:08:12 crc kubenswrapper[4825]: E0219 00:08:12.802950 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs podName:80aa664d-e111-41f6-815d-f4185e1f72ff nodeName:}" failed. No retries permitted until 2026-02-19 00:08:13.80292136 +0000 UTC m=+39.493887437 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs") pod "network-metrics-daemon-bhnmw" (UID: "80aa664d-e111-41f6-815d-f4185e1f72ff") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.813578 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.847094 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.847159 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.847181 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.847213 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.847234 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:12Z","lastTransitionTime":"2026-02-19T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.950687 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.950786 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.950806 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.950836 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:12 crc kubenswrapper[4825]: I0219 00:08:12.950857 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:12Z","lastTransitionTime":"2026-02-19T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.016118 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:36:16.216366438 +0000 UTC Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.054285 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.054364 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.054386 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.054416 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.054439 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:13Z","lastTransitionTime":"2026-02-19T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.065809 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.065906 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:13 crc kubenswrapper[4825]: E0219 00:08:13.066001 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.066021 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:13 crc kubenswrapper[4825]: E0219 00:08:13.066148 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:13 crc kubenswrapper[4825]: E0219 00:08:13.066245 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.067215 4825 scope.go:117] "RemoveContainer" containerID="a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.158030 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.158090 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.158103 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.158120 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.158138 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:13Z","lastTransitionTime":"2026-02-19T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.261832 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.261907 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.261925 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.261949 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.261971 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:13Z","lastTransitionTime":"2026-02-19T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.365127 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.365189 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.365208 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.365238 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.365258 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:13Z","lastTransitionTime":"2026-02-19T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.428423 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.436453 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9"} Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.437213 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.458316 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.468328 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.468376 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.468388 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 
00:08:13.468406 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.468422 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:13Z","lastTransitionTime":"2026-02-19T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.478740 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.503234 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.520921 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.542551 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.559983 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.571728 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.571805 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.571821 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.571847 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.571862 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:13Z","lastTransitionTime":"2026-02-19T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.577348 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d
433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.599019 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.613815 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.632078 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.650888 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.667717 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc 
kubenswrapper[4825]: I0219 00:08:13.675319 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.675380 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.675403 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.675434 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.675457 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:13Z","lastTransitionTime":"2026-02-19T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.689026 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.708439 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.735962 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:08Z\\\",\\\"message\\\":\\\"or removal\\\\nI0219 00:08:08.385724 6256 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:08:08.385912 6256 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:08.385972 6256 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:08.385981 6256 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI0219 00:08:08.386046 6256 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:08.385925 6256 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:08.386157 6256 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:08.386243 6256 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:08.386284 6256 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:08.386359 6256 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 00:08:08.386176 6256 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:08.386428 6256 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:08.386483 6256 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:08:08.386550 6256 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:08.386726 6256 factory.go:656] Stopping watch factory\\\\nI0219 00:08:08.386774 6256 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.755116 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.814973 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:13 crc kubenswrapper[4825]: E0219 00:08:13.815226 
4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:08:13 crc kubenswrapper[4825]: E0219 00:08:13.815348 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs podName:80aa664d-e111-41f6-815d-f4185e1f72ff nodeName:}" failed. No retries permitted until 2026-02-19 00:08:15.815319045 +0000 UTC m=+41.506285102 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs") pod "network-metrics-daemon-bhnmw" (UID: "80aa664d-e111-41f6-815d-f4185e1f72ff") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.817752 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.817817 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.817833 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.817864 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.817881 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:13Z","lastTransitionTime":"2026-02-19T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.920550 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.920630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.920651 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.920685 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:13 crc kubenswrapper[4825]: I0219 00:08:13.920708 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:13Z","lastTransitionTime":"2026-02-19T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.016837 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:22:49.146645898 +0000 UTC Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.024891 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.024995 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.025014 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.025048 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.025071 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:14Z","lastTransitionTime":"2026-02-19T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.065449 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:14 crc kubenswrapper[4825]: E0219 00:08:14.065743 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.129395 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.129471 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.129488 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.129545 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.129564 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:14Z","lastTransitionTime":"2026-02-19T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.233723 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.233789 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.233807 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.233836 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.233856 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:14Z","lastTransitionTime":"2026-02-19T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.337351 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.337441 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.337462 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.337490 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.337546 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:14Z","lastTransitionTime":"2026-02-19T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.441210 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.441282 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.441307 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.441337 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.441364 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:14Z","lastTransitionTime":"2026-02-19T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.547550 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.547621 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.547642 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.547679 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.547701 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:14Z","lastTransitionTime":"2026-02-19T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.651845 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.651899 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.651909 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.651927 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.651939 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:14Z","lastTransitionTime":"2026-02-19T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.754350 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.754436 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.754466 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.754501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.754553 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:14Z","lastTransitionTime":"2026-02-19T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.857983 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.858062 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.858080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.858109 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.858130 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:14Z","lastTransitionTime":"2026-02-19T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.962059 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.962141 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.962152 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.962172 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:14 crc kubenswrapper[4825]: I0219 00:08:14.962187 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:14Z","lastTransitionTime":"2026-02-19T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.017436 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 13:29:34.73294528 +0000 UTC Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.065013 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.065134 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.065237 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:15 crc kubenswrapper[4825]: E0219 00:08:15.065350 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:15 crc kubenswrapper[4825]: E0219 00:08:15.065636 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:15 crc kubenswrapper[4825]: E0219 00:08:15.065880 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.066703 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.066781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.066812 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.066852 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.066878 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:15Z","lastTransitionTime":"2026-02-19T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.094422 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.115362 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.138840 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.173437 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:15 crc 
kubenswrapper[4825]: I0219 00:08:15.173576 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.173607 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.173641 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.173672 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:15Z","lastTransitionTime":"2026-02-19T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.182908 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.219876 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c22
1bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9
f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.237150 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.253025 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.269575 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.275557 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.275632 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.275654 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.275678 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.275697 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:15Z","lastTransitionTime":"2026-02-19T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.284825 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.299475 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.316639 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.331759 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.357789 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:08Z\\\",\\\"message\\\":\\\"or removal\\\\nI0219 00:08:08.385724 6256 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:08:08.385912 6256 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:08.385972 6256 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:08.385981 6256 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI0219 00:08:08.386046 6256 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:08.385925 6256 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:08.386157 6256 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:08.386243 6256 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:08.386284 6256 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:08.386359 6256 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 00:08:08.386176 6256 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:08.386428 6256 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:08.386483 6256 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:08:08.386550 6256 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:08.386726 6256 factory.go:656] Stopping watch factory\\\\nI0219 00:08:08.386774 6256 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.376075 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.378333 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.378371 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.378382 4825 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.378402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.378415 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:15Z","lastTransitionTime":"2026-02-19T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.395382 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.412132 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:15Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.480899 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.480988 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.481015 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.481049 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.481073 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:15Z","lastTransitionTime":"2026-02-19T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.584801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.584861 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.584879 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.584903 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.584921 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:15Z","lastTransitionTime":"2026-02-19T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.688448 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.688558 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.688573 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.688595 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.688609 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:15Z","lastTransitionTime":"2026-02-19T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.791365 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.791427 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.791441 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.791460 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.791470 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:15Z","lastTransitionTime":"2026-02-19T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.839167 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:15 crc kubenswrapper[4825]: E0219 00:08:15.839864 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:08:15 crc kubenswrapper[4825]: E0219 00:08:15.840010 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs podName:80aa664d-e111-41f6-815d-f4185e1f72ff nodeName:}" failed. No retries permitted until 2026-02-19 00:08:19.839966093 +0000 UTC m=+45.530932220 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs") pod "network-metrics-daemon-bhnmw" (UID: "80aa664d-e111-41f6-815d-f4185e1f72ff") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.895355 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.895430 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.895454 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.895482 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.895542 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:15Z","lastTransitionTime":"2026-02-19T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.999668 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.999774 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:15 crc kubenswrapper[4825]: I0219 00:08:15.999801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:15.999829 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:15.999856 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:15Z","lastTransitionTime":"2026-02-19T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.017879 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:15:53.26126032 +0000 UTC Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.065471 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:16 crc kubenswrapper[4825]: E0219 00:08:16.065767 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.103632 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.103716 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.103735 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.104240 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.104290 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:16Z","lastTransitionTime":"2026-02-19T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.208458 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.208589 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.208614 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.208642 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.208666 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:16Z","lastTransitionTime":"2026-02-19T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.312428 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.312540 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.312561 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.312588 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.312608 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:16Z","lastTransitionTime":"2026-02-19T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.415808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.415853 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.415865 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.415882 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.415892 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:16Z","lastTransitionTime":"2026-02-19T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.519206 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.519272 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.519290 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.519319 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.519339 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:16Z","lastTransitionTime":"2026-02-19T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.623087 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.623164 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.623181 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.623208 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.623230 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:16Z","lastTransitionTime":"2026-02-19T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.726668 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.726732 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.726747 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.726769 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.726789 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:16Z","lastTransitionTime":"2026-02-19T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.829575 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.829627 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.829644 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.829667 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.829680 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:16Z","lastTransitionTime":"2026-02-19T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.932798 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.933174 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.933320 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.933406 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:16 crc kubenswrapper[4825]: I0219 00:08:16.933496 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:16Z","lastTransitionTime":"2026-02-19T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.018329 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:05:01.630800717 +0000 UTC Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.037683 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.037745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.037762 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.037787 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.037809 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:17Z","lastTransitionTime":"2026-02-19T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.065834 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.066071 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:17 crc kubenswrapper[4825]: E0219 00:08:17.066386 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.066684 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:17 crc kubenswrapper[4825]: E0219 00:08:17.066855 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:17 crc kubenswrapper[4825]: E0219 00:08:17.067201 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.140718 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.141152 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.141347 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.141501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.141687 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:17Z","lastTransitionTime":"2026-02-19T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.244746 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.244822 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.244841 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.244870 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.244890 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:17Z","lastTransitionTime":"2026-02-19T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.348239 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.348300 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.348317 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.348341 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.348354 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:17Z","lastTransitionTime":"2026-02-19T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.451340 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.451395 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.451407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.451427 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.451440 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:17Z","lastTransitionTime":"2026-02-19T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.555138 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.555200 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.555221 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.555245 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.555264 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:17Z","lastTransitionTime":"2026-02-19T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.658697 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.658756 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.658774 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.658801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.658821 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:17Z","lastTransitionTime":"2026-02-19T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.763024 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.763085 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.763104 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.763132 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.763153 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:17Z","lastTransitionTime":"2026-02-19T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.867573 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.867646 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.867665 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.867696 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.867718 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:17Z","lastTransitionTime":"2026-02-19T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.970566 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.970620 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.970636 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.970660 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:17 crc kubenswrapper[4825]: I0219 00:08:17.970675 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:17Z","lastTransitionTime":"2026-02-19T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.019376 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:08:50.121244072 +0000 UTC Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.065216 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:18 crc kubenswrapper[4825]: E0219 00:08:18.065443 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.074612 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.074694 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.074719 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.074745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.074766 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:18Z","lastTransitionTime":"2026-02-19T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.178258 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.178337 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.178356 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.178388 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.178411 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:18Z","lastTransitionTime":"2026-02-19T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.281536 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.281613 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.281636 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.281668 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.281690 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:18Z","lastTransitionTime":"2026-02-19T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.385100 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.385178 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.385199 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.385226 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.385252 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:18Z","lastTransitionTime":"2026-02-19T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.488957 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.489055 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.489085 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.489122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.489148 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:18Z","lastTransitionTime":"2026-02-19T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.591935 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.592003 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.592018 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.592044 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.592063 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:18Z","lastTransitionTime":"2026-02-19T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.695233 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.695282 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.695293 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.695308 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.695322 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:18Z","lastTransitionTime":"2026-02-19T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.798492 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.798610 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.798631 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.798658 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.798679 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:18Z","lastTransitionTime":"2026-02-19T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.838302 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.839821 4825 scope.go:117] "RemoveContainer" containerID="e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.902878 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.903630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.903677 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.903717 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:18 crc kubenswrapper[4825]: I0219 00:08:18.903745 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:18Z","lastTransitionTime":"2026-02-19T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.007208 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.007269 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.007287 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.007316 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.007336 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:19Z","lastTransitionTime":"2026-02-19T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.020444 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:25:33.568190868 +0000 UTC Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.065462 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.065536 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:19 crc kubenswrapper[4825]: E0219 00:08:19.065686 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.065756 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:19 crc kubenswrapper[4825]: E0219 00:08:19.065991 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:19 crc kubenswrapper[4825]: E0219 00:08:19.066226 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.112459 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.112552 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.112572 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.112597 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.112619 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:19Z","lastTransitionTime":"2026-02-19T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.216217 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.216299 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.216333 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.216370 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.216401 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:19Z","lastTransitionTime":"2026-02-19T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.319632 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.319697 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.319720 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.319759 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.319779 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:19Z","lastTransitionTime":"2026-02-19T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.422443 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.422525 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.422542 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.422569 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.422584 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:19Z","lastTransitionTime":"2026-02-19T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.463996 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/1.log" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.467212 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e"} Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.467722 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.497886 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.516902 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.525853 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.525897 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.525908 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.525953 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.525965 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:19Z","lastTransitionTime":"2026-02-19T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.532440 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.547085 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.566751 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.583193 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.599365 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.676359 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.678080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.678113 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.678122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:19 crc 
kubenswrapper[4825]: I0219 00:08:19.678138 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.678149 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:19Z","lastTransitionTime":"2026-02-19T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.710730 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:08Z\\\",\\\"message\\\":\\\"or removal\\\\nI0219 00:08:08.385724 6256 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:08:08.385912 6256 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:08.385972 6256 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:08.385981 6256 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI0219 00:08:08.386046 6256 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:08.385925 6256 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:08.386157 6256 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:08.386243 6256 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:08.386284 6256 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:08.386359 6256 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 00:08:08.386176 6256 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:08.386428 6256 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:08.386483 6256 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:08:08.386550 6256 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:08.386726 6256 factory.go:656] Stopping watch factory\\\\nI0219 00:08:08.386774 6256 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.726377 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.741300 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.753202 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.768290 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.781442 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.781560 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.781589 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.781628 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.781655 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:19Z","lastTransitionTime":"2026-02-19T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.784607 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.798555 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.812912 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.884425 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.884529 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.884555 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.884584 
4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.884604 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:19Z","lastTransitionTime":"2026-02-19T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.891493 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:08:19 crc kubenswrapper[4825]: E0219 00:08:19.891720 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 00:08:19 crc kubenswrapper[4825]: E0219 00:08:19.891842 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs podName:80aa664d-e111-41f6-815d-f4185e1f72ff nodeName:}" failed. No retries permitted until 2026-02-19 00:08:27.891810836 +0000 UTC m=+53.582776913 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs") pod "network-metrics-daemon-bhnmw" (UID: "80aa664d-e111-41f6-815d-f4185e1f72ff") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.987184 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.987232 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.987245 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.987263 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:19 crc kubenswrapper[4825]: I0219 00:08:19.987275 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:19Z","lastTransitionTime":"2026-02-19T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.021827 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:32:05.141553119 +0000 UTC
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.065284 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:08:20 crc kubenswrapper[4825]: E0219 00:08:20.065487 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.089648 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.089702 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.089713 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.089732 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.089743 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:20Z","lastTransitionTime":"2026-02-19T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.192940 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.192991 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.193003 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.193025 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.193038 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:20Z","lastTransitionTime":"2026-02-19T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.296939 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.297012 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.297034 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.297064 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.297087 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:20Z","lastTransitionTime":"2026-02-19T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.400439 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.400544 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.400565 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.400591 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.400632 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:20Z","lastTransitionTime":"2026-02-19T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.475218 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/2.log"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.476251 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/1.log"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.482079 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e" exitCode=1
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.482829 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e"}
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.483108 4825 scope.go:117] "RemoveContainer" containerID="e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.483624 4825 scope.go:117] "RemoveContainer" containerID="6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e"
Feb 19 00:08:20 crc kubenswrapper[4825]: E0219 00:08:20.483998 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.504317 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.504383 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.504406 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.504439 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.504464 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:20Z","lastTransitionTime":"2026-02-19T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.516166 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.532968 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.548946 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.566359 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.591751 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f
8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125
f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":
\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.608679 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.608742 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.608764 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.608795 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.608820 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:20Z","lastTransitionTime":"2026-02-19T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.613291 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc 
kubenswrapper[4825]: I0219 00:08:20.633376 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.655990 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.677734 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.699314 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.712471 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.712589 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.712619 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.712648 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.712667 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:20Z","lastTransitionTime":"2026-02-19T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.729563 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.751821 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.786105 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3465527e385923dd93f6d04a4757e767739efb42fa59954fcc0c50a4f0a36bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:08Z\\\",\\\"message\\\":\\\"or removal\\\\nI0219 00:08:08.385724 6256 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:08:08.385912 6256 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:08:08.385972 6256 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:08:08.385981 6256 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI0219 00:08:08.386046 6256 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:08:08.385925 6256 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:08:08.386157 6256 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:08:08.386243 6256 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 00:08:08.386284 6256 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:08:08.386359 6256 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 00:08:08.386176 6256 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:08:08.386428 6256 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:08:08.386483 6256 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:08:08.386550 6256 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:08:08.386726 6256 factory.go:656] Stopping watch factory\\\\nI0219 00:08:08.386774 6256 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:19Z\\\",\\\"message\\\":\\\"ntity-vrzqb\\\\nI0219 00:08:19.962159 6491 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0219 00:08:19.962179 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:08:19.962140 6491 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.811180 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.822292 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.822428 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 
00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.822460 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.822500 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.822589 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:20Z","lastTransitionTime":"2026-02-19T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.835555 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.852472 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.926144 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.926188 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.926200 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.926224 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:20 crc kubenswrapper[4825]: I0219 00:08:20.926240 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:20Z","lastTransitionTime":"2026-02-19T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.023089 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:07:15.530369779 +0000 UTC Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.031150 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.031207 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.031220 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.031244 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.031257 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.065994 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.066231 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:21 crc kubenswrapper[4825]: E0219 00:08:21.066398 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.066419 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:21 crc kubenswrapper[4825]: E0219 00:08:21.066667 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:21 crc kubenswrapper[4825]: E0219 00:08:21.066839 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.135371 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.135919 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.136056 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.136202 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.136349 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.240416 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.240497 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.240544 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.240578 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.240660 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.343999 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.344071 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.344091 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.344122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.344143 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.448499 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.448584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.448640 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.448666 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.448686 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.499061 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/2.log" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.507338 4825 scope.go:117] "RemoveContainer" containerID="6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e" Feb 19 00:08:21 crc kubenswrapper[4825]: E0219 00:08:21.507707 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.531077 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.552052 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.552124 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.552142 4825 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.552167 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.552188 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.552418 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.572000 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.594595 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:19Z\\\",\\\"message\\\":\\\"ntity-vrzqb\\\\nI0219 00:08:19.962159 6491 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0219 00:08:19.962179 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:08:19.962140 6491 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.615619 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.639025 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.655201 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.655269 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.655287 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.655311 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.655331 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.662248 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.678615 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.695892 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.710340 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.728237 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.745566 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.758769 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.758811 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.758823 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.758843 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.758857 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.767694 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.781651 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.796904 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.811496 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.843628 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.843700 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.843727 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.843758 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.843782 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: E0219 00:08:21.861862 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.866599 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.866644 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.866657 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.866682 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.866697 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: E0219 00:08:21.884818 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.889526 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.889573 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.889593 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.889618 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.889633 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: E0219 00:08:21.906932 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.917961 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.918029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.918051 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.918083 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.918111 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: E0219 00:08:21.940284 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.945478 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.945609 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.945675 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.945742 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.945809 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:21 crc kubenswrapper[4825]: E0219 00:08:21.965683 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:21Z is after 2025-08-24T17:21:41Z"
Feb 19 00:08:21 crc kubenswrapper[4825]: E0219 00:08:21.965849 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.967795 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.967857 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.967878 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.967913 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:21 crc kubenswrapper[4825]: I0219 00:08:21.967935 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:21Z","lastTransitionTime":"2026-02-19T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.023354 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:30:18.970125595 +0000 UTC
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.065855 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:08:22 crc kubenswrapper[4825]: E0219 00:08:22.066079 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.075546 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.075603 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.075615 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.075640 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.075666 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:22Z","lastTransitionTime":"2026-02-19T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.178864 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.178938 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.178958 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.178990 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.179009 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:22Z","lastTransitionTime":"2026-02-19T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.282875 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.282931 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.282948 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.282974 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.282995 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:22Z","lastTransitionTime":"2026-02-19T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.386586 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.386655 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.386673 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.386700 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.386717 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:22Z","lastTransitionTime":"2026-02-19T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.490134 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.490225 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.490250 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.490286 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.490306 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:22Z","lastTransitionTime":"2026-02-19T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.593201 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.593273 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.593291 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.593319 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.593337 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:22Z","lastTransitionTime":"2026-02-19T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.698535 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.698604 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.698622 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.698653 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.698673 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:22Z","lastTransitionTime":"2026-02-19T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.803250 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.803320 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.803339 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.803366 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.803387 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:22Z","lastTransitionTime":"2026-02-19T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.907331 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.907396 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.907414 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.907444 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:22 crc kubenswrapper[4825]: I0219 00:08:22.907463 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:22Z","lastTransitionTime":"2026-02-19T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.010883 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.010949 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.010968 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.010992 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.011012 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:23Z","lastTransitionTime":"2026-02-19T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.023970 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:01:13.598806359 +0000 UTC
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.065833 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.065905 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.065927 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:08:23 crc kubenswrapper[4825]: E0219 00:08:23.066051 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:08:23 crc kubenswrapper[4825]: E0219 00:08:23.066191 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:08:23 crc kubenswrapper[4825]: E0219 00:08:23.066387 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.114231 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.114310 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.114335 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.114368 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.114397 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:23Z","lastTransitionTime":"2026-02-19T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.218544 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.218625 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.218644 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.218679 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.218701 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:23Z","lastTransitionTime":"2026-02-19T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.323310 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.323385 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.323405 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.323433 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.323452 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:23Z","lastTransitionTime":"2026-02-19T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.427628 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.427674 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.427691 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.427712 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.427726 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:23Z","lastTransitionTime":"2026-02-19T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.532199 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.532282 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.532302 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.532338 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.532359 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:23Z","lastTransitionTime":"2026-02-19T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.635944 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.636057 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.636074 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.636099 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.636116 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:23Z","lastTransitionTime":"2026-02-19T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.739326 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.739391 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.739409 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.739436 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.739456 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:23Z","lastTransitionTime":"2026-02-19T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.843712 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.843781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.843794 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.843812 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.843827 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:23Z","lastTransitionTime":"2026-02-19T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.947259 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.947310 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.947321 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.947340 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:23 crc kubenswrapper[4825]: I0219 00:08:23.947354 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:23Z","lastTransitionTime":"2026-02-19T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.024395 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:43:45.838492116 +0000 UTC
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.050842 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.050915 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.050939 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.050980 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.051004 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:24Z","lastTransitionTime":"2026-02-19T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.065710 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:08:24 crc kubenswrapper[4825]: E0219 00:08:24.066129 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.154985 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.155079 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.155106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.155137 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.155163 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:24Z","lastTransitionTime":"2026-02-19T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.265621 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.266986 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.266998 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.267017 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.267048 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:24Z","lastTransitionTime":"2026-02-19T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.370595 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.370660 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.370681 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.370706 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.370725 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:24Z","lastTransitionTime":"2026-02-19T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.474393 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.474465 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.474484 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.474568 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.474600 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:24Z","lastTransitionTime":"2026-02-19T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.577912 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.577974 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.577991 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.578016 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.578035 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:24Z","lastTransitionTime":"2026-02-19T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.681488 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.681626 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.681650 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.681688 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.681710 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:24Z","lastTransitionTime":"2026-02-19T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.785644 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.785710 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.785730 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.785757 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.785779 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:24Z","lastTransitionTime":"2026-02-19T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.889436 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.889540 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.889561 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.889588 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.889607 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:24Z","lastTransitionTime":"2026-02-19T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.992536 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.992611 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.992630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.992659 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:24 crc kubenswrapper[4825]: I0219 00:08:24.992682 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:24Z","lastTransitionTime":"2026-02-19T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.025278 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:25:59.77576865 +0000 UTC Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.065461 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.065567 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.065575 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:25 crc kubenswrapper[4825]: E0219 00:08:25.065674 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:25 crc kubenswrapper[4825]: E0219 00:08:25.065789 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:25 crc kubenswrapper[4825]: E0219 00:08:25.065967 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.090950 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.095866 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.095935 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.095953 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.095978 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.095999 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:25Z","lastTransitionTime":"2026-02-19T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.115234 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.142808 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.168535 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.194553 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.199302 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:25 crc 
kubenswrapper[4825]: I0219 00:08:25.199372 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.199395 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.199428 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.199448 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:25Z","lastTransitionTime":"2026-02-19T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.211086 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.231855 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19
T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.254667 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.277379 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.298131 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc 
kubenswrapper[4825]: I0219 00:08:25.303817 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.303892 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.303913 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.303944 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.303966 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:25Z","lastTransitionTime":"2026-02-19T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.321226 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d
433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.346197 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.370645 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.392759 4825 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.407834 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.407907 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.407963 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.407997 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.408023 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:25Z","lastTransitionTime":"2026-02-19T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.413215 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.439247 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:19Z\\\",\\\"message\\\":\\\"ntity-vrzqb\\\\nI0219 00:08:19.962159 6491 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0219 00:08:19.962179 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:08:19.962140 6491 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.511669 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.511752 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.511779 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.511844 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.511877 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:25Z","lastTransitionTime":"2026-02-19T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.615922 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.615988 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.616006 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.616036 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.616053 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:25Z","lastTransitionTime":"2026-02-19T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.720416 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.720486 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.720597 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.720783 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.720821 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:25Z","lastTransitionTime":"2026-02-19T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.823440 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.823464 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.823496 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.823521 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.823530 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:25Z","lastTransitionTime":"2026-02-19T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.926311 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.926362 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.926379 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.926404 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:25 crc kubenswrapper[4825]: I0219 00:08:25.926422 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:25Z","lastTransitionTime":"2026-02-19T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.025823 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:17:48.183879347 +0000 UTC Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.029554 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.029668 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.029687 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.029711 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.029730 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:26Z","lastTransitionTime":"2026-02-19T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.065416 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.065671 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.133369 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.133753 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.134029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.134232 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.134480 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:26Z","lastTransitionTime":"2026-02-19T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.238837 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.238923 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.238937 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.238963 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.238979 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:26Z","lastTransitionTime":"2026-02-19T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.342390 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.342446 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.342458 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.342480 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.342492 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:26Z","lastTransitionTime":"2026-02-19T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.445114 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.445178 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.445197 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.445222 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.445243 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:26Z","lastTransitionTime":"2026-02-19T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.549140 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.549676 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.549930 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.550281 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.550537 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:26Z","lastTransitionTime":"2026-02-19T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.653928 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.654621 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.654803 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.654966 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.655161 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:26Z","lastTransitionTime":"2026-02-19T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.759653 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.759714 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.759725 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.759746 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.759757 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:26Z","lastTransitionTime":"2026-02-19T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.778130 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.778331 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 00:08:58.778306181 +0000 UTC m=+84.469272228 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.778373 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.778410 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.778443 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.778463 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.778677 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.778693 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.778708 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.778749 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:58.778739903 +0000 UTC m=+84.469705950 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.779173 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.779219 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:58.779209805 +0000 UTC m=+84.470175862 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.779231 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.779275 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:58.779264247 +0000 UTC m=+84.470230284 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.779703 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.779900 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.780034 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:26 crc kubenswrapper[4825]: E0219 00:08:26.780248 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:58.780215232 +0000 UTC m=+84.471181319 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.863801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.863871 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.863888 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.863913 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.863934 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:26Z","lastTransitionTime":"2026-02-19T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.967783 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.967860 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.967881 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.967911 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:26 crc kubenswrapper[4825]: I0219 00:08:26.967933 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:26Z","lastTransitionTime":"2026-02-19T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.026477 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:46:03.915274506 +0000 UTC Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.032018 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.047039 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.055967 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc 
kubenswrapper[4825]: I0219 00:08:27.065329 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:27 crc kubenswrapper[4825]: E0219 00:08:27.065452 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.065565 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.065624 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:27 crc kubenswrapper[4825]: E0219 00:08:27.065794 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:27 crc kubenswrapper[4825]: E0219 00:08:27.065955 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.070072 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.070110 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.070121 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.070141 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.070156 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:27Z","lastTransitionTime":"2026-02-19T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.074292 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d
433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.091206 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.104655 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.117982 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.138764 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.153225 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.171637 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.178935 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.178990 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.179006 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:27 crc 
kubenswrapper[4825]: I0219 00:08:27.179050 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.179066 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:27Z","lastTransitionTime":"2026-02-19T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.206527 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:19Z\\\",\\\"message\\\":\\\"ntity-vrzqb\\\\nI0219 00:08:19.962159 6491 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0219 00:08:19.962179 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:08:19.962140 6491 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.222622 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.237496 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.251580 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.277138 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.282234 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.282274 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.282319 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.282348 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.282387 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:27Z","lastTransitionTime":"2026-02-19T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.294044 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.313779 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.327406 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.386379 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.386444 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.386462 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.386488 
4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.386538 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:27Z","lastTransitionTime":"2026-02-19T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.490085 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.490171 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.490190 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.490220 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.490239 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:27Z","lastTransitionTime":"2026-02-19T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.594137 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.594225 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.594244 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.594276 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.594297 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:27Z","lastTransitionTime":"2026-02-19T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.698166 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.698256 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.698278 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.698315 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.698345 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:27Z","lastTransitionTime":"2026-02-19T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.802106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.802195 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.802223 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.802257 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.802283 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:27Z","lastTransitionTime":"2026-02-19T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.905103 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.905169 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.905188 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.905216 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.905236 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:27Z","lastTransitionTime":"2026-02-19T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:27 crc kubenswrapper[4825]: I0219 00:08:27.993033 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:27 crc kubenswrapper[4825]: E0219 00:08:27.993266 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:08:27 crc kubenswrapper[4825]: E0219 00:08:27.993370 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs podName:80aa664d-e111-41f6-815d-f4185e1f72ff nodeName:}" failed. No retries permitted until 2026-02-19 00:08:43.993341956 +0000 UTC m=+69.684308043 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs") pod "network-metrics-daemon-bhnmw" (UID: "80aa664d-e111-41f6-815d-f4185e1f72ff") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.009041 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.009103 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.009124 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.009161 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.009190 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:28Z","lastTransitionTime":"2026-02-19T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.026836 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:02:40.669772663 +0000 UTC Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.065785 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:28 crc kubenswrapper[4825]: E0219 00:08:28.066018 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.114450 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.114937 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.115086 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.115224 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.115341 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:28Z","lastTransitionTime":"2026-02-19T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.218775 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.219430 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.219451 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.219476 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.219495 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:28Z","lastTransitionTime":"2026-02-19T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.323095 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.323418 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.323578 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.323735 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.323855 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:28Z","lastTransitionTime":"2026-02-19T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.428012 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.428096 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.428112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.428138 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.428159 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:28Z","lastTransitionTime":"2026-02-19T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.533490 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.533572 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.533587 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.533615 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.533630 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:28Z","lastTransitionTime":"2026-02-19T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.637282 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.637346 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.637372 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.637404 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.637423 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:28Z","lastTransitionTime":"2026-02-19T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.741729 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.741801 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.741820 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.741847 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.741867 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:28Z","lastTransitionTime":"2026-02-19T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.845768 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.845831 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.845851 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.845879 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.845900 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:28Z","lastTransitionTime":"2026-02-19T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.949483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.949601 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.949620 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.949647 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:28 crc kubenswrapper[4825]: I0219 00:08:28.949665 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:28Z","lastTransitionTime":"2026-02-19T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.027898 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:35:12.246366078 +0000 UTC Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.053493 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.053571 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.053584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.053608 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.053622 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:29Z","lastTransitionTime":"2026-02-19T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.065077 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.065117 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:29 crc kubenswrapper[4825]: E0219 00:08:29.065280 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.065420 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:29 crc kubenswrapper[4825]: E0219 00:08:29.065501 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:29 crc kubenswrapper[4825]: E0219 00:08:29.065969 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.081109 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.099001 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.118905 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.152015 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:19Z\\\",\\\"message\\\":\\\"ntity-vrzqb\\\\nI0219 00:08:19.962159 6491 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0219 00:08:19.962179 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:08:19.962140 6491 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.158737 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.158964 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.159139 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.159290 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.159478 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:29Z","lastTransitionTime":"2026-02-19T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.173423 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.196108 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3482bfb-9d88-41c4-b80c-8152e044df34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3399fe5505d69fda5547fa2b30a745b1e14c3b4efc70d848b052db43fd8d65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3479bc97f163b6b20bd4ff73f1b4c6c4a984f626a5ab0bfbf38ea47f03ea88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5ba6ec8bfc6a365ea9af422f7c6ec0479adbeef9d26a7afe8c66f4ba339482b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.213007 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.228343 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.244933 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.262541 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.262586 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.262598 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.262616 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.262630 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:29Z","lastTransitionTime":"2026-02-19T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.269588 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z 
is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.285142 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.303047 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 
00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.322945 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.345954 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.366398 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.366441 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.366452 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.366470 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.366482 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:29Z","lastTransitionTime":"2026-02-19T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.366728 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.384925 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.404851 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.422314 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:29Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:29 crc 
kubenswrapper[4825]: I0219 00:08:29.470136 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.470273 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.470295 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.470360 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.470393 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:29Z","lastTransitionTime":"2026-02-19T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.573415 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.573490 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.573524 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.573546 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.573562 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:29Z","lastTransitionTime":"2026-02-19T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.678317 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.678361 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.678379 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.678402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.678421 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:29Z","lastTransitionTime":"2026-02-19T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.782283 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.782359 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.782383 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.782414 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.782432 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:29Z","lastTransitionTime":"2026-02-19T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.886389 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.886475 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.886494 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.886567 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.886593 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:29Z","lastTransitionTime":"2026-02-19T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.990267 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.990345 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.990364 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.990391 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:29 crc kubenswrapper[4825]: I0219 00:08:29.990414 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:29Z","lastTransitionTime":"2026-02-19T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.028734 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:39:18.89294051 +0000 UTC Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.065241 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:30 crc kubenswrapper[4825]: E0219 00:08:30.065498 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.095046 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.095143 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.095161 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.095189 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.095210 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:30Z","lastTransitionTime":"2026-02-19T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.198407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.198488 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.198546 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.198577 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.198596 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:30Z","lastTransitionTime":"2026-02-19T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.302308 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.302376 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.302394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.302421 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.302440 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:30Z","lastTransitionTime":"2026-02-19T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.406678 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.406757 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.406815 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.406839 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.406859 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:30Z","lastTransitionTime":"2026-02-19T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.511062 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.511129 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.511148 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.511172 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.511193 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:30Z","lastTransitionTime":"2026-02-19T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.614794 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.614882 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.614909 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.614957 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.614985 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:30Z","lastTransitionTime":"2026-02-19T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.719439 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.719580 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.719604 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.719638 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.719656 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:30Z","lastTransitionTime":"2026-02-19T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.823744 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.823813 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.823839 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.823875 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.823895 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:30Z","lastTransitionTime":"2026-02-19T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.934043 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.934155 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.934208 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.934261 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:30 crc kubenswrapper[4825]: I0219 00:08:30.934286 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:30Z","lastTransitionTime":"2026-02-19T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.029685 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 11:14:55.44444246 +0000 UTC
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.037904 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.037958 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.037976 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.038005 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.038024 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:31Z","lastTransitionTime":"2026-02-19T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.065607 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.065767 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.065788 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:08:31 crc kubenswrapper[4825]: E0219 00:08:31.065953 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:08:31 crc kubenswrapper[4825]: E0219 00:08:31.066369 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:08:31 crc kubenswrapper[4825]: E0219 00:08:31.066690 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.141333 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.141394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.141413 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.141441 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.141461 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:31Z","lastTransitionTime":"2026-02-19T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.244696 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.244765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.244780 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.244803 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.244817 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:31Z","lastTransitionTime":"2026-02-19T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.348906 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.348982 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.349011 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.349049 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.349076 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:31Z","lastTransitionTime":"2026-02-19T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.452884 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.452964 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.452982 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.453013 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.453030 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:31Z","lastTransitionTime":"2026-02-19T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.556284 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.556354 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.556380 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.556413 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.556437 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:31Z","lastTransitionTime":"2026-02-19T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.661675 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.661757 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.661784 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.661808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.661829 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:31Z","lastTransitionTime":"2026-02-19T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.765823 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.765904 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.765927 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.765956 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.765978 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:31Z","lastTransitionTime":"2026-02-19T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.869206 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.869271 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.869291 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.869329 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.869367 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:31Z","lastTransitionTime":"2026-02-19T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.972552 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.972617 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.972636 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.972663 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:31 crc kubenswrapper[4825]: I0219 00:08:31.972682 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:31Z","lastTransitionTime":"2026-02-19T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.030463 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:58:32.94167665 +0000 UTC
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.065399 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:08:32 crc kubenswrapper[4825]: E0219 00:08:32.065693 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.066915 4825 scope.go:117] "RemoveContainer" containerID="6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e"
Feb 19 00:08:32 crc kubenswrapper[4825]: E0219 00:08:32.067228 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.076035 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.076111 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.076137 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.076172 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.076201 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.179963 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.180027 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.180048 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.180076 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.180095 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.283686 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.283757 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.283777 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.283809 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.283832 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.324201 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.324272 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.324294 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.324329 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.324358 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:32 crc kubenswrapper[4825]: E0219 00:08:32.340696 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.345455 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.345577 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.345599 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.345631 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.345651 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:32 crc kubenswrapper[4825]: E0219 00:08:32.367477 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.373819 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.373884 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.373903 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.373931 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.373950 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:32 crc kubenswrapper[4825]: E0219 00:08:32.389301 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.395677 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.395760 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.395782 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.395813 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.395833 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:32 crc kubenswrapper[4825]: E0219 00:08:32.416645 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.422771 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.422826 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.422840 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.422863 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.422883 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:32 crc kubenswrapper[4825]: E0219 00:08:32.443851 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:32 crc kubenswrapper[4825]: E0219 00:08:32.444093 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.446540 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.446589 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.446601 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.446622 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.446636 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.550634 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.550699 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.550719 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.550745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.550763 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.654252 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.654340 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.654361 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.654389 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.654410 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.758373 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.758608 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.758624 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.758649 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.758704 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.861632 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.861732 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.861752 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.861778 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.861799 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.964945 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.965059 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.965080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.965104 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:32 crc kubenswrapper[4825]: I0219 00:08:32.965123 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:32Z","lastTransitionTime":"2026-02-19T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.030922 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:09:25.757990365 +0000 UTC Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.065698 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.065676 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.065910 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:33 crc kubenswrapper[4825]: E0219 00:08:33.066466 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:33 crc kubenswrapper[4825]: E0219 00:08:33.066651 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:33 crc kubenswrapper[4825]: E0219 00:08:33.066789 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.067455 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.067536 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.067555 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.067576 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.067594 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:33Z","lastTransitionTime":"2026-02-19T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.170807 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.170871 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.170894 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.170925 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.170953 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:33Z","lastTransitionTime":"2026-02-19T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.274205 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.274265 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.274284 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.274307 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.274327 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:33Z","lastTransitionTime":"2026-02-19T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.377075 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.377142 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.377166 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.377191 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.377210 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:33Z","lastTransitionTime":"2026-02-19T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.480915 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.480995 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.481013 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.481053 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.481080 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:33Z","lastTransitionTime":"2026-02-19T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.584211 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.584265 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.584278 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.584300 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.584312 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:33Z","lastTransitionTime":"2026-02-19T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.688028 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.688070 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.688080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.688100 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.688111 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:33Z","lastTransitionTime":"2026-02-19T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.791697 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.791766 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.791784 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.791808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.791828 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:33Z","lastTransitionTime":"2026-02-19T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.896366 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.896436 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.896456 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.896483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.896537 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:33Z","lastTransitionTime":"2026-02-19T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.999479 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.999579 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.999599 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.999628 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:33 crc kubenswrapper[4825]: I0219 00:08:33.999650 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:33Z","lastTransitionTime":"2026-02-19T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.032152 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:10:00.281218221 +0000 UTC Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.065766 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:34 crc kubenswrapper[4825]: E0219 00:08:34.066016 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.104930 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.105005 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.105021 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.105046 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.105081 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:34Z","lastTransitionTime":"2026-02-19T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.209443 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.209553 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.209578 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.209609 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.209630 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:34Z","lastTransitionTime":"2026-02-19T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.312440 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.312573 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.312608 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.312647 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.312674 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:34Z","lastTransitionTime":"2026-02-19T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.415775 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.415852 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.415870 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.415901 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.415924 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:34Z","lastTransitionTime":"2026-02-19T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.524349 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.524432 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.524451 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.524481 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.524502 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:34Z","lastTransitionTime":"2026-02-19T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.627971 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.628062 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.628086 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.628120 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.628143 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:34Z","lastTransitionTime":"2026-02-19T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.731544 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.731602 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.731619 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.731647 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.731665 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:34Z","lastTransitionTime":"2026-02-19T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.835776 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.835863 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.835890 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.835960 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.835982 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:34Z","lastTransitionTime":"2026-02-19T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.939457 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.939583 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.939615 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.939652 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:34 crc kubenswrapper[4825]: I0219 00:08:34.939677 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:34Z","lastTransitionTime":"2026-02-19T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.032895 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:01:00.157148455 +0000 UTC Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.043051 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.043119 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.043139 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.043166 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.043186 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:35Z","lastTransitionTime":"2026-02-19T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.065081 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.065169 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.065169 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:35 crc kubenswrapper[4825]: E0219 00:08:35.065300 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:35 crc kubenswrapper[4825]: E0219 00:08:35.065554 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:35 crc kubenswrapper[4825]: E0219 00:08:35.065720 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.096778 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.116667 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.140530 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.147015 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.147112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.147185 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.147270 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.147302 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:35Z","lastTransitionTime":"2026-02-19T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.161815 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.185651 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.209561 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.237380 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.251216 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.251288 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.251311 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.251339 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.251364 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:35Z","lastTransitionTime":"2026-02-19T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.260098 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.283693 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.313263 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.334821 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3482bfb-9d88-41c4-b80c-8152e044df34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3399fe5505d69fda5547fa2b30a745b1e14c3b4efc70d848b052db43fd8d65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3479bc97f163b6b20bd4ff73f1b4c6c4a984f626a5ab0bfbf38ea47f03ea88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5ba6ec8bfc6a365ea9af422f7c6ec0479adbeef9d26a7afe8c66f4ba339482b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942
ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.355608 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.355694 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.355718 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.355747 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:35 crc 
kubenswrapper[4825]: I0219 00:08:35.355767 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:35Z","lastTransitionTime":"2026-02-19T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.359575 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.380677 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.416672 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:19Z\\\",\\\"message\\\":\\\"ntity-vrzqb\\\\nI0219 00:08:19.962159 6491 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0219 00:08:19.962179 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:08:19.962140 6491 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.437611 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.459314 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.459391 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.459417 4825 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.459449 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.459475 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:35Z","lastTransitionTime":"2026-02-19T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.461798 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.484913 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.563918 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.564044 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.564167 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.564357 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.564382 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:35Z","lastTransitionTime":"2026-02-19T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.668150 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.668695 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.668876 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.669029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.669603 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:35Z","lastTransitionTime":"2026-02-19T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.774765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.774859 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.774890 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.774933 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.774960 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:35Z","lastTransitionTime":"2026-02-19T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.879494 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.879584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.879598 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.879621 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.879635 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:35Z","lastTransitionTime":"2026-02-19T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.982669 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.982751 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.982773 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.982800 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:35 crc kubenswrapper[4825]: I0219 00:08:35.982819 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:35Z","lastTransitionTime":"2026-02-19T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.034027 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:31:06.299986614 +0000 UTC Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.065480 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:36 crc kubenswrapper[4825]: E0219 00:08:36.065784 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.085752 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.085824 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.085875 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.085903 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.086225 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:36Z","lastTransitionTime":"2026-02-19T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.190390 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.190468 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.190487 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.190544 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.190564 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:36Z","lastTransitionTime":"2026-02-19T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.294425 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.294998 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.295149 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.295312 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.295456 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:36Z","lastTransitionTime":"2026-02-19T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.399361 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.399441 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.399468 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.399550 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.399579 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:36Z","lastTransitionTime":"2026-02-19T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.502456 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.502579 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.502600 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.502633 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.502655 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:36Z","lastTransitionTime":"2026-02-19T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.605701 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.606087 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.606241 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.606397 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.606577 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:36Z","lastTransitionTime":"2026-02-19T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.710948 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.711565 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.711603 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.711639 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.711662 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:36Z","lastTransitionTime":"2026-02-19T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.815986 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.816068 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.816088 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.816118 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.816138 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:36Z","lastTransitionTime":"2026-02-19T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.919950 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.920037 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.920063 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.920097 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:36 crc kubenswrapper[4825]: I0219 00:08:36.920124 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:36Z","lastTransitionTime":"2026-02-19T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.024902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.024964 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.024983 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.025009 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.025028 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:37Z","lastTransitionTime":"2026-02-19T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.034315 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:56:20.677090906 +0000 UTC Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.066141 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:37 crc kubenswrapper[4825]: E0219 00:08:37.066355 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.066739 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:37 crc kubenswrapper[4825]: E0219 00:08:37.066865 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.067209 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:37 crc kubenswrapper[4825]: E0219 00:08:37.067337 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.128906 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.129019 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.129041 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.129098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.129118 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:37Z","lastTransitionTime":"2026-02-19T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.234139 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.234203 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.234233 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.234265 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.234291 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:37Z","lastTransitionTime":"2026-02-19T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.337962 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.338018 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.338039 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.338063 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.338081 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:37Z","lastTransitionTime":"2026-02-19T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.441663 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.441714 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.441732 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.441755 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.441775 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:37Z","lastTransitionTime":"2026-02-19T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.545073 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.545584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.545616 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.545640 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.545656 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:37Z","lastTransitionTime":"2026-02-19T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.648916 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.648979 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.648999 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.649025 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.649043 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:37Z","lastTransitionTime":"2026-02-19T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.754311 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.754380 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.754399 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.754431 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.754453 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:37Z","lastTransitionTime":"2026-02-19T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.857969 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.858037 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.858059 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.858084 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.858102 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:37Z","lastTransitionTime":"2026-02-19T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.961230 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.961325 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.961352 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.961387 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:37 crc kubenswrapper[4825]: I0219 00:08:37.961412 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:37Z","lastTransitionTime":"2026-02-19T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.035332 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:33:59.826718578 +0000 UTC Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.064131 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.064179 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.064191 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.064229 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.064249 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:38Z","lastTransitionTime":"2026-02-19T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.065199 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:38 crc kubenswrapper[4825]: E0219 00:08:38.065433 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.167198 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.167281 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.167306 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.167954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.168280 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:38Z","lastTransitionTime":"2026-02-19T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.272153 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.272218 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.272236 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.272262 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.272282 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:38Z","lastTransitionTime":"2026-02-19T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.375079 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.375138 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.375156 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.375186 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.375208 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:38Z","lastTransitionTime":"2026-02-19T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.478494 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.478592 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.478610 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.478628 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.478639 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:38Z","lastTransitionTime":"2026-02-19T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.581830 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.581875 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.581887 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.581908 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.581921 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:38Z","lastTransitionTime":"2026-02-19T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.684648 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.684692 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.684724 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.684749 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.684764 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:38Z","lastTransitionTime":"2026-02-19T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.788243 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.788298 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.788311 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.788335 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.788349 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:38Z","lastTransitionTime":"2026-02-19T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.891607 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.891669 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.891686 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.891706 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.891718 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:38Z","lastTransitionTime":"2026-02-19T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.994550 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.994623 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.994645 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.994677 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:38 crc kubenswrapper[4825]: I0219 00:08:38.994702 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:38Z","lastTransitionTime":"2026-02-19T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.036148 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:52:46.371176575 +0000 UTC Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.065118 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.065161 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:39 crc kubenswrapper[4825]: E0219 00:08:39.065261 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.065337 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:39 crc kubenswrapper[4825]: E0219 00:08:39.065407 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:39 crc kubenswrapper[4825]: E0219 00:08:39.065595 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.098494 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.098589 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.098605 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.098629 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.098647 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:39Z","lastTransitionTime":"2026-02-19T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.201787 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.201852 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.201870 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.201897 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.201914 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:39Z","lastTransitionTime":"2026-02-19T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.304995 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.305046 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.305063 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.305090 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.305109 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:39Z","lastTransitionTime":"2026-02-19T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.408354 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.408414 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.408427 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.408450 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.408469 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:39Z","lastTransitionTime":"2026-02-19T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.511347 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.511413 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.511422 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.511437 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.511449 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:39Z","lastTransitionTime":"2026-02-19T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.620397 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.620765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.620804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.620829 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.620845 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:39Z","lastTransitionTime":"2026-02-19T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.723173 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.723211 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.723220 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.723234 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.723243 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:39Z","lastTransitionTime":"2026-02-19T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.826642 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.826962 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.827050 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.827140 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.827237 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:39Z","lastTransitionTime":"2026-02-19T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.930497 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.930586 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.930599 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.930618 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:39 crc kubenswrapper[4825]: I0219 00:08:39.930632 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:39Z","lastTransitionTime":"2026-02-19T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.034600 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.034663 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.034679 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.034700 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.034713 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:40Z","lastTransitionTime":"2026-02-19T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.036808 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 21:19:19.754362946 +0000 UTC Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.065441 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:40 crc kubenswrapper[4825]: E0219 00:08:40.065669 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.138845 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.138935 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.138954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.138984 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.139005 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:40Z","lastTransitionTime":"2026-02-19T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.241895 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.241983 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.241996 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.242066 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.242082 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:40Z","lastTransitionTime":"2026-02-19T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.345676 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.345748 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.345765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.345792 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.345811 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:40Z","lastTransitionTime":"2026-02-19T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.450719 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.450803 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.450826 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.450858 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.450883 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:40Z","lastTransitionTime":"2026-02-19T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.554735 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.554791 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.554804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.554822 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.554835 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:40Z","lastTransitionTime":"2026-02-19T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.658271 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.658354 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.658386 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.658426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.658456 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:40Z","lastTransitionTime":"2026-02-19T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.761559 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.761626 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.761644 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.761669 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.761690 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:40Z","lastTransitionTime":"2026-02-19T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.865146 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.865232 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.865251 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.865284 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.865305 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:40Z","lastTransitionTime":"2026-02-19T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.969083 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.969183 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.969203 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.969231 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:40 crc kubenswrapper[4825]: I0219 00:08:40.969257 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:40Z","lastTransitionTime":"2026-02-19T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.037060 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:22:55.474465791 +0000 UTC Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.065404 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.065429 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:41 crc kubenswrapper[4825]: E0219 00:08:41.065612 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.065708 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:41 crc kubenswrapper[4825]: E0219 00:08:41.065775 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:41 crc kubenswrapper[4825]: E0219 00:08:41.065955 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.071869 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.071905 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.071918 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.071935 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.071951 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:41Z","lastTransitionTime":"2026-02-19T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.175269 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.175384 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.175402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.175429 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.175444 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:41Z","lastTransitionTime":"2026-02-19T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.278827 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.278906 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.278919 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.278935 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.278944 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:41Z","lastTransitionTime":"2026-02-19T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.382890 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.382942 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.382955 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.382976 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.382993 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:41Z","lastTransitionTime":"2026-02-19T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.486021 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.486074 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.486088 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.486118 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.486133 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:41Z","lastTransitionTime":"2026-02-19T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.589647 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.589703 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.589715 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.589736 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.589751 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:41Z","lastTransitionTime":"2026-02-19T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.692907 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.692968 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.692983 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.693008 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.693024 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:41Z","lastTransitionTime":"2026-02-19T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.795327 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.795363 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.795373 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.795389 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.795400 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:41Z","lastTransitionTime":"2026-02-19T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.899122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.899163 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.899173 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.899187 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:41 crc kubenswrapper[4825]: I0219 00:08:41.899197 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:41Z","lastTransitionTime":"2026-02-19T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.002335 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.002382 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.002400 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.002416 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.002429 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.038023 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 19:45:15.28215609 +0000 UTC Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.065397 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:42 crc kubenswrapper[4825]: E0219 00:08:42.065601 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.105087 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.105146 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.105165 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.105186 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.105200 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.208441 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.208523 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.208538 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.208557 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.208579 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.311370 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.311431 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.311445 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.311465 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.311478 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.414819 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.414894 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.414908 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.414924 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.414935 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.518094 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.518170 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.518214 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.518243 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.518262 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.621053 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.621658 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.621819 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.621967 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.622090 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.725221 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.725307 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.725322 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.725345 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.725359 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.792480 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.792560 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.792575 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.792597 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.792611 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: E0219 00:08:42.809043 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.813753 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.813828 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.813842 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.813859 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.813872 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: E0219 00:08:42.834461 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.839833 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.839875 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.839885 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.839902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.839913 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: E0219 00:08:42.853905 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.859016 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.859060 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.859074 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.859091 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.859105 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: E0219 00:08:42.872042 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.875105 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.875135 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.875146 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.875162 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.875172 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: E0219 00:08:42.891009 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:42 crc kubenswrapper[4825]: E0219 00:08:42.891168 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.893050 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.893080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.893091 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.893106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.893119 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.995803 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.995854 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.995866 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.995885 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:42 crc kubenswrapper[4825]: I0219 00:08:42.995894 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:42Z","lastTransitionTime":"2026-02-19T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.038667 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:05:49.377333374 +0000 UTC Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.065018 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.065071 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.065079 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:43 crc kubenswrapper[4825]: E0219 00:08:43.065178 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:43 crc kubenswrapper[4825]: E0219 00:08:43.065259 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:43 crc kubenswrapper[4825]: E0219 00:08:43.065372 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.099220 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.099285 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.099297 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.099317 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.099329 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:43Z","lastTransitionTime":"2026-02-19T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.203362 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.203422 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.203432 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.203454 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.203467 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:43Z","lastTransitionTime":"2026-02-19T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.305699 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.305773 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.305786 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.305819 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.305838 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:43Z","lastTransitionTime":"2026-02-19T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.408659 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.408712 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.408724 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.408747 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.408762 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:43Z","lastTransitionTime":"2026-02-19T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.511153 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.511180 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.511190 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.511203 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.511213 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:43Z","lastTransitionTime":"2026-02-19T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.614093 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.614139 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.614152 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.614170 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.614187 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:43Z","lastTransitionTime":"2026-02-19T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.717420 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.717480 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.717493 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.717534 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.717548 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:43Z","lastTransitionTime":"2026-02-19T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.820919 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.820974 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.820985 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.821014 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.821027 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:43Z","lastTransitionTime":"2026-02-19T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.926563 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.926635 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.926653 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.926702 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:43 crc kubenswrapper[4825]: I0219 00:08:43.926717 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:43Z","lastTransitionTime":"2026-02-19T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.011742 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:44 crc kubenswrapper[4825]: E0219 00:08:44.012112 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:08:44 crc kubenswrapper[4825]: E0219 00:08:44.012608 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs podName:80aa664d-e111-41f6-815d-f4185e1f72ff nodeName:}" failed. No retries permitted until 2026-02-19 00:09:16.012539895 +0000 UTC m=+101.703505982 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs") pod "network-metrics-daemon-bhnmw" (UID: "80aa664d-e111-41f6-815d-f4185e1f72ff") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.029530 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.029585 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.029609 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.029634 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.029655 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:44Z","lastTransitionTime":"2026-02-19T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.039722 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:56:13.46425319 +0000 UTC Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.065226 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:44 crc kubenswrapper[4825]: E0219 00:08:44.065483 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.080184 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.132831 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.132898 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.132921 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.132948 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.132968 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:44Z","lastTransitionTime":"2026-02-19T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.236251 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.236322 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.236342 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.236370 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.236392 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:44Z","lastTransitionTime":"2026-02-19T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.339239 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.339280 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.339315 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.339335 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.339347 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:44Z","lastTransitionTime":"2026-02-19T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.443422 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.443493 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.443539 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.443575 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.443603 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:44Z","lastTransitionTime":"2026-02-19T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.546572 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.546626 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.546641 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.546663 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.546680 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:44Z","lastTransitionTime":"2026-02-19T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.612933 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfx7x_2daa6777-c1b1-4fae-9c14-cfe10867288a/kube-multus/0.log" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.612977 4825 generic.go:334] "Generic (PLEG): container finished" podID="2daa6777-c1b1-4fae-9c14-cfe10867288a" containerID="5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a" exitCode=1 Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.613550 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfx7x" event={"ID":"2daa6777-c1b1-4fae-9c14-cfe10867288a","Type":"ContainerDied","Data":"5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a"} Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.613802 4825 scope.go:117] "RemoveContainer" containerID="5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.634856 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.649987 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.650032 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.650043 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.650061 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.650072 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:44Z","lastTransitionTime":"2026-02-19T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.651991 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.667875 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.684405 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"2026-02-19T00:07:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c\\\\n2026-02-19T00:07:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c to /host/opt/cni/bin/\\\\n2026-02-19T00:07:59Z [verbose] multus-daemon started\\\\n2026-02-19T00:07:59Z [verbose] Readiness Indicator file check\\\\n2026-02-19T00:08:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.707878 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.729675 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154
edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:44Z is 
after 2025-08-24T17:21:41Z" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.909393 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.912029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.912080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.912094 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.912115 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.912129 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:44Z","lastTransitionTime":"2026-02-19T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.934805 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.949552 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.967722 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7163b7fc-078f-4584-b38a-07ca1c80a2f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bb2cff8e4fc6646f9ef1474f78f40ba15d5a6ebc3a650f358f0dc714a70652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:44 crc kubenswrapper[4825]: I0219 00:08:44.983887 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.015025 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.015090 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.015109 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.015132 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.015153 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:45Z","lastTransitionTime":"2026-02-19T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.017944 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:19Z\\\",\\\"message\\\":\\\"ntity-vrzqb\\\\nI0219 00:08:19.962159 6491 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0219 00:08:19.962179 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:08:19.962140 6491 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.040706 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:15:43.265942428 +0000 UTC Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.046133 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.063087 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3482bfb-9d88-41c4-b80c-8152e044df34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3399fe5505d69fda5547fa2b30a745b1e14c3b4efc70d848b052db43fd8d65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3479bc97f163b6b20bd4ff73f1b4c6c4a984f626a5ab0bfbf38ea47f03ea88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5ba6ec8bfc6a365ea9af422f7c6ec0479adbeef9d26a7afe8c66f4ba339482b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.065275 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.065368 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:45 crc kubenswrapper[4825]: E0219 00:08:45.065417 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.065375 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:45 crc kubenswrapper[4825]: E0219 00:08:45.065546 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:45 crc kubenswrapper[4825]: E0219 00:08:45.065645 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.066241 4825 scope.go:117] "RemoveContainer" containerID="6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.092916 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.106123 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.118297 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.118336 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.118349 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:45 crc 
kubenswrapper[4825]: I0219 00:08:45.118367 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.118379 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:45Z","lastTransitionTime":"2026-02-19T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.119999 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.132415 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.148531 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.161987 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.174838 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"2026-02-19T00:07:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c\\\\n2026-02-19T00:07:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c to /host/opt/cni/bin/\\\\n2026-02-19T00:07:59Z [verbose] multus-daemon started\\\\n2026-02-19T00:07:59Z [verbose] Readiness Indicator file check\\\\n2026-02-19T00:08:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.185609 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.199434 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.213320 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc 
kubenswrapper[4825]: I0219 00:08:45.221945 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.222032 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.222046 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.222114 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.222131 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:45Z","lastTransitionTime":"2026-02-19T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.230637 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7163b7fc-078f-4584-b38a-07ca1c80a2f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bb2cff8e4fc6646f9ef1474f78f40ba15d5a6ebc3a650f358f0dc714a70652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.243858 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.257937 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.272484 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.285431 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.298671 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3482bfb-9d88-41c4-b80c-8152e044df34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3399fe5505d69fda5547fa2b30a745b1e14c3b4efc70d848b052db43fd8d65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3479bc97f163b6b20bd4ff73f1b4c6c4a984f626a5ab0bfbf38ea47f03ea88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5ba6ec8bfc6a365ea9af422f7c6ec0479adbeef9d26a7afe8c66f4ba339482b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.310833 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.322666 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.324375 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.324430 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.324452 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:45 crc 
kubenswrapper[4825]: I0219 00:08:45.324476 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.324497 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:45Z","lastTransitionTime":"2026-02-19T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.347314 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:19Z\\\",\\\"message\\\":\\\"ntity-vrzqb\\\\nI0219 00:08:19.962159 6491 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0219 00:08:19.962179 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:08:19.962140 6491 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.360488 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.374929 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.386400 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.426770 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.426807 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.426816 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.426830 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.426840 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:45Z","lastTransitionTime":"2026-02-19T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.529527 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.529577 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.529586 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.529603 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.529621 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:45Z","lastTransitionTime":"2026-02-19T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.618753 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/2.log" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.621709 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f"} Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.622080 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.625302 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfx7x_2daa6777-c1b1-4fae-9c14-cfe10867288a/kube-multus/0.log" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.625334 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfx7x" event={"ID":"2daa6777-c1b1-4fae-9c14-cfe10867288a","Type":"ContainerStarted","Data":"1e9505df7615b5027220ed25ee309b0a066503c64d3f8f45ef3fc23de1af2ac4"} Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.631906 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.631962 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.631978 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.632001 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:45 crc 
kubenswrapper[4825]: I0219 00:08:45.632024 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:45Z","lastTransitionTime":"2026-02-19T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.640882 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f
5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1a
f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d
0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\
\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.653660 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.670325 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7163b7fc-078f-4584-b38a-07ca1c80a2f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bb2cff8e4fc6646f9ef1474f78f40ba15d5a6ebc3a650f358f0dc714a70652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.689851 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.704317 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.722878 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.734292 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.734335 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.734346 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.734382 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.734395 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:45Z","lastTransitionTime":"2026-02-19T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.737308 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.750842 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3482bfb-9d88-41c4-b80c-8152e044df34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3399fe5505d69fda5547fa2b30a745b1e14c3b4efc70d848b052db43fd8d65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3479bc97f163b6b20bd4ff73f1b4c6c4a984f626a5ab0bfbf38ea47f03ea88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5ba6ec8bfc6a365ea9af422f7c6ec0479adbeef9d26a7afe8c66f4ba339482b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.768426 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.785020 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.809942 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:19Z\\\",\\\"message\\\":\\\"ntity-vrzqb\\\\nI0219 00:08:19.962159 6491 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0219 00:08:19.962179 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:08:19.962140 6491 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} 
name:Service_openshift-machine-api/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"
/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.822040 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.836605 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.836632 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.836641 4825 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.836658 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.836672 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:45Z","lastTransitionTime":"2026-02-19T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.841205 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.851818 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.868987 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.881001 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.900208 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"2026-02-19T00:07:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c\\\\n2026-02-19T00:07:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c to /host/opt/cni/bin/\\\\n2026-02-19T00:07:59Z [verbose] multus-daemon started\\\\n2026-02-19T00:07:59Z [verbose] Readiness Indicator file check\\\\n2026-02-19T00:08:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.915539 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.931343 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.939362 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.939427 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.939446 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.939474 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.939492 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:45Z","lastTransitionTime":"2026-02-19T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.947024 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.962616 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.975112 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:45 crc kubenswrapper[4825]: I0219 00:08:45.989744 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.003553 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc 
kubenswrapper[4825]: I0219 00:08:46.016955 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7163b7fc-078f-4584-b38a-07ca1c80a2f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bb2cff8e4fc6646f9ef1474f78f40ba15d5a6ebc3a650f358f0dc714a70652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.030716 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.041073 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:04:13.191552887 +0000 UTC Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.041827 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.042690 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.042743 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.042756 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:46 crc 
kubenswrapper[4825]: I0219 00:08:46.042782 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.042796 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:46Z","lastTransitionTime":"2026-02-19T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.065447 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:46 crc kubenswrapper[4825]: E0219 00:08:46.065805 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.071949 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:19Z\\\",\\\"message\\\":\\\"ntity-vrzqb\\\\nI0219 00:08:19.962159 6491 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0219 00:08:19.962179 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:08:19.962140 6491 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} 
name:Service_openshift-machine-api/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"
/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.085113 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.098042 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3482bfb-9d88-41c4-b80c-8152e044df34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3399fe5505d69fda5547fa2b30a745b1e14c3b4efc70d848b052db43fd8d65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3479bc97f163b6b20bd4ff73f1b4c6c4a984f626a5ab0bfbf38ea47f03ea88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5ba6ec8bfc6a365ea9af422f7c6ec0479adbeef9d26a7afe8c66f4ba339482b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.112055 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.127624 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.139579 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.145888 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.145925 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.145939 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.145957 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.145973 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:46Z","lastTransitionTime":"2026-02-19T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.153756 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9505df7615b5027220ed25ee309b0a066503c64d3f8f45ef3fc23de1af2ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"2026-02-19T00:07:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c\\\\n2026-02-19T00:07:58+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c to /host/opt/cni/bin/\\\\n2026-02-19T00:07:59Z [verbose] multus-daemon started\\\\n2026-02-19T00:07:59Z [verbose] Readiness Indicator file check\\\\n2026-02-19T00:08:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.165582 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.180338 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
0:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.249343 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.249382 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.249394 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.249415 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.249433 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:46Z","lastTransitionTime":"2026-02-19T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.352794 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.352870 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.352897 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.352933 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.352958 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:46Z","lastTransitionTime":"2026-02-19T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.456670 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.456755 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.456775 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.456805 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.456823 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:46Z","lastTransitionTime":"2026-02-19T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.560574 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.560659 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.560678 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.560704 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.560721 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:46Z","lastTransitionTime":"2026-02-19T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.631886 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/3.log" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.633073 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/2.log" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.637418 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" exitCode=1 Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.637493 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f"} Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.637611 4825 scope.go:117] "RemoveContainer" containerID="6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.639078 4825 scope.go:117] "RemoveContainer" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" Feb 19 00:08:46 crc kubenswrapper[4825]: E0219 00:08:46.639379 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.656749 4825 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7163b7fc-078f-4584-b38a-07ca1c80a2f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bb2cff8e4fc6646f9ef1474f78f40ba15d5a6ebc3a650f358f0dc714a70652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.664237 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.664295 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.664316 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.664345 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.664365 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:46Z","lastTransitionTime":"2026-02-19T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.675675 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b0
84652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.694969 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.717137 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.732574 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.752387 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.766294 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc 
kubenswrapper[4825]: I0219 00:08:46.768910 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.768986 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.769005 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.769031 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.769049 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:46Z","lastTransitionTime":"2026-02-19T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.781416 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3482bfb-9d88-41c4-b80c-8152e044df34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3399fe5505d69fda5547fa2b30a745b1e14c3b4efc70d848b052db43fd8d65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3479bc97f163b6b20bd4ff73f1b4c
6c4a984f626a5ab0bfbf38ea47f03ea88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5ba6ec8bfc6a365ea9af422f7c6ec0479adbeef9d26a7afe8c66f4ba339482b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.799781 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.817598 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.844254 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c73d44b574f3092f21b5507b5ef49f92f4bd0b25992d554c48853774558c12e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:19Z\\\",\\\"message\\\":\\\"ntity-vrzqb\\\\nI0219 00:08:19.962159 6491 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0219 00:08:19.962179 6491 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:19Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:08:19.962140 6491 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-operator-machine-webhook]} name:Service_openshift-machine-api/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"message\\\":\\\"IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0219 00:08:45.924480 6837 services_controller.go:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 2.595542ms\\\\nI0219 00:08:45.924553 6837 lb_config.go:1031] Cluster endpoints for 
openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0219 00:08:45.924578 6837 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0219 00:08:45.924369 6837 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\
\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.858275 4825 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.872260 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.872318 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 
00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.872330 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.872349 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.872360 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:46Z","lastTransitionTime":"2026-02-19T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.874143 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.886398 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.900904 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.914064 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.928277 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9505df7615b5027220ed25ee309b0a066503c64d3f8f45ef3fc23de1af2ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"2026-02-19T00:07:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c\\\\n2026-02-19T00:07:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c to /host/opt/cni/bin/\\\\n2026-02-19T00:07:59Z [verbose] multus-daemon started\\\\n2026-02-19T00:07:59Z [verbose] Readiness Indicator file check\\\\n2026-02-19T00:08:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.939528 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:46Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.975804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.975850 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.975860 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.975876 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:46 crc kubenswrapper[4825]: I0219 00:08:46.975889 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:46Z","lastTransitionTime":"2026-02-19T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.041928 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:00:44.629746393 +0000 UTC Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.065625 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.065689 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.065782 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:47 crc kubenswrapper[4825]: E0219 00:08:47.065794 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:47 crc kubenswrapper[4825]: E0219 00:08:47.065908 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:47 crc kubenswrapper[4825]: E0219 00:08:47.066013 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.078233 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.078273 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.078284 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.078301 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.078311 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:47Z","lastTransitionTime":"2026-02-19T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.181745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.181802 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.181814 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.181839 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.181853 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:47Z","lastTransitionTime":"2026-02-19T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.284349 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.284400 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.284410 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.284429 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.284442 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:47Z","lastTransitionTime":"2026-02-19T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.387059 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.387114 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.387127 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.387145 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.387158 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:47Z","lastTransitionTime":"2026-02-19T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.490930 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.490976 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.490986 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.491004 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.491015 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:47Z","lastTransitionTime":"2026-02-19T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.593614 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.593684 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.593697 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.593717 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.593733 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:47Z","lastTransitionTime":"2026-02-19T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.643447 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/3.log" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.648927 4825 scope.go:117] "RemoveContainer" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" Feb 19 00:08:47 crc kubenswrapper[4825]: E0219 00:08:47.649217 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.663899 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.678283 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.690189 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.696384 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.696416 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.696424 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.696467 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.696480 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:47Z","lastTransitionTime":"2026-02-19T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.703370 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9505df7615b5027220ed25ee309b0a066503c64d3f8f45ef3fc23de1af2ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"2026-02-19T00:07:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c\\\\n2026-02-19T00:07:58+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c to /host/opt/cni/bin/\\\\n2026-02-19T00:07:59Z [verbose] multus-daemon started\\\\n2026-02-19T00:07:59Z [verbose] Readiness Indicator file check\\\\n2026-02-19T00:08:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.716013 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.728442 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
0:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.744266 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.761809 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.776024 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.796065 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.799167 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.799206 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.799218 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.799238 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.799250 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:47Z","lastTransitionTime":"2026-02-19T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.819441 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.832972 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.848433 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7163b7fc-078f-4584-b38a-07ca1c80a2f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bb2cff8e4fc6646f9ef1474f78f40ba15d5a6ebc3a650f358f0dc714a70652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.864320 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.882530 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.902325 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.902374 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.902385 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:47 crc 
kubenswrapper[4825]: I0219 00:08:47.902402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.902293 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"message\\\":\\\"IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0219 00:08:45.924480 6837 services_controller.go:360] Finished syncing service metrics on 
namespace openshift-authentication-operator for network=default : 2.595542ms\\\\nI0219 00:08:45.924553 6837 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0219 00:08:45.924578 6837 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0219 00:08:45.924369 6837 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.902426 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:47Z","lastTransitionTime":"2026-02-19T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.923591 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:47 crc kubenswrapper[4825]: I0219 00:08:47.939492 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3482bfb-9d88-41c4-b80c-8152e044df34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3399fe5505d69fda5547fa2b30a745b1e14c3b4efc70d848b052db43fd8d65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3479bc97f163b6b20bd4ff73f1b4c6c4a984f626a5ab0bfbf38ea47f03ea88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5ba6ec8bfc6a365ea9af422f7c6ec0479adbeef9d26a7afe8c66f4ba339482b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.007337 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.007402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.007415 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.007438 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.007451 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:48Z","lastTransitionTime":"2026-02-19T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.042692 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:26:37.382161326 +0000 UTC Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.065309 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:48 crc kubenswrapper[4825]: E0219 00:08:48.065487 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.111462 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.111532 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.111545 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.111567 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.111582 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:48Z","lastTransitionTime":"2026-02-19T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.214898 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.214950 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.214962 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.214985 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.214996 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:48Z","lastTransitionTime":"2026-02-19T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.318125 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.318212 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.318234 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.318264 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.318284 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:48Z","lastTransitionTime":"2026-02-19T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.421175 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.421238 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.421257 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.421286 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.421305 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:48Z","lastTransitionTime":"2026-02-19T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.524674 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.524724 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.524736 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.524758 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.524773 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:48Z","lastTransitionTime":"2026-02-19T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.627408 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.627471 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.627484 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.627537 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.627580 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:48Z","lastTransitionTime":"2026-02-19T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.730213 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.730264 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.730318 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.730355 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.730369 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:48Z","lastTransitionTime":"2026-02-19T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.832192 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.832232 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.832244 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.832259 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.832292 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:48Z","lastTransitionTime":"2026-02-19T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.935419 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.935484 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.935494 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.935524 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:48 crc kubenswrapper[4825]: I0219 00:08:48.935537 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:48Z","lastTransitionTime":"2026-02-19T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.038770 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.038810 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.038823 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.038840 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.038852 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:49Z","lastTransitionTime":"2026-02-19T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.043172 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:58:59.547991301 +0000 UTC Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.065326 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.065446 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:49 crc kubenswrapper[4825]: E0219 00:08:49.065590 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.065700 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:49 crc kubenswrapper[4825]: E0219 00:08:49.065741 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:49 crc kubenswrapper[4825]: E0219 00:08:49.065906 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.142569 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.142640 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.142662 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.142696 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.142720 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:49Z","lastTransitionTime":"2026-02-19T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.246147 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.246223 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.246272 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.246302 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.246328 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:49Z","lastTransitionTime":"2026-02-19T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.350845 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.350916 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.350935 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.350962 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.350990 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:49Z","lastTransitionTime":"2026-02-19T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.454948 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.455022 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.455041 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.455069 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.455088 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:49Z","lastTransitionTime":"2026-02-19T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.559238 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.559316 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.559336 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.559362 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.559378 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:49Z","lastTransitionTime":"2026-02-19T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.663429 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.663498 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.663547 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.663579 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.663600 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:49Z","lastTransitionTime":"2026-02-19T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.766448 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.766549 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.766568 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.766597 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.766617 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:49Z","lastTransitionTime":"2026-02-19T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.870273 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.870372 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.870391 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.870423 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.870445 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:49Z","lastTransitionTime":"2026-02-19T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.972954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.973040 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.973063 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.973094 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:49 crc kubenswrapper[4825]: I0219 00:08:49.973116 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:49Z","lastTransitionTime":"2026-02-19T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.044128 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 07:58:32.387485292 +0000 UTC Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.064932 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:50 crc kubenswrapper[4825]: E0219 00:08:50.065174 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.076668 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.076755 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.076778 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.076808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.076827 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:50Z","lastTransitionTime":"2026-02-19T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.181481 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.181592 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.181611 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.181643 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.181662 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:50Z","lastTransitionTime":"2026-02-19T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.285300 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.285426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.285447 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.285477 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.285535 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:50Z","lastTransitionTime":"2026-02-19T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.389298 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.389362 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.389379 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.389404 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.389422 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:50Z","lastTransitionTime":"2026-02-19T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.492999 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.493070 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.493091 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.493118 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.493136 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:50Z","lastTransitionTime":"2026-02-19T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.597083 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.597157 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.597177 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.597203 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.597222 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:50Z","lastTransitionTime":"2026-02-19T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.700448 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.700570 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.700597 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.700630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.700655 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:50Z","lastTransitionTime":"2026-02-19T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.803578 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.803656 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.803681 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.803707 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.803727 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:50Z","lastTransitionTime":"2026-02-19T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.906952 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.907027 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.907047 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.907072 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:50 crc kubenswrapper[4825]: I0219 00:08:50.907089 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:50Z","lastTransitionTime":"2026-02-19T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.011405 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.011470 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.011488 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.011538 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.011557 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:51Z","lastTransitionTime":"2026-02-19T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.044349 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:33:06.508987014 +0000 UTC Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.065293 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.065326 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:51 crc kubenswrapper[4825]: E0219 00:08:51.065912 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.065471 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:51 crc kubenswrapper[4825]: E0219 00:08:51.065985 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:51 crc kubenswrapper[4825]: E0219 00:08:51.066435 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.118650 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.118722 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.118740 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.118767 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.118790 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:51Z","lastTransitionTime":"2026-02-19T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.222974 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.223046 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.223065 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.223098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.223121 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:51Z","lastTransitionTime":"2026-02-19T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.326690 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.327198 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.327344 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.327649 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.327818 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:51Z","lastTransitionTime":"2026-02-19T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.431073 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.431169 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.431189 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.431218 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.431238 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:51Z","lastTransitionTime":"2026-02-19T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.535222 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.535298 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.535326 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.535359 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.535382 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:51Z","lastTransitionTime":"2026-02-19T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.639223 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.639304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.639340 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.639371 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.639397 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:51Z","lastTransitionTime":"2026-02-19T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.742850 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.743238 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.743412 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.743627 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.743779 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:51Z","lastTransitionTime":"2026-02-19T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.848019 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.848466 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.848722 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.848940 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.849139 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:51Z","lastTransitionTime":"2026-02-19T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.952359 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.952433 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.952447 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.952466 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:51 crc kubenswrapper[4825]: I0219 00:08:51.952479 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:51Z","lastTransitionTime":"2026-02-19T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.044535 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:31:21.988447761 +0000 UTC Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.055620 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.056372 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.056590 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.056922 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.057077 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:52Z","lastTransitionTime":"2026-02-19T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.064979 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:52 crc kubenswrapper[4825]: E0219 00:08:52.065204 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.160710 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.161076 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.161232 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.161413 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.161636 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:52Z","lastTransitionTime":"2026-02-19T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.264912 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.265233 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.265430 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.265711 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.265898 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:52Z","lastTransitionTime":"2026-02-19T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.369291 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.369747 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.369942 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.370195 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.370628 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:52Z","lastTransitionTime":"2026-02-19T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.473824 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.473883 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.473894 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.473912 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.473927 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:52Z","lastTransitionTime":"2026-02-19T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.577941 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.578004 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.578023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.578050 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.578067 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:52Z","lastTransitionTime":"2026-02-19T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.681784 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.681875 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.681902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.681936 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.681970 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:52Z","lastTransitionTime":"2026-02-19T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.784868 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.784944 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.784964 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.784990 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.785016 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:52Z","lastTransitionTime":"2026-02-19T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.888400 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.888466 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.888480 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.888499 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.888533 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:52Z","lastTransitionTime":"2026-02-19T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.991072 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.991161 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.991177 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.991198 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:52 crc kubenswrapper[4825]: I0219 00:08:52.991216 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:52Z","lastTransitionTime":"2026-02-19T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.045203 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:43:18.057740277 +0000 UTC Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.058361 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.058413 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.058426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.058446 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.058460 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.065120 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.065293 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:53 crc kubenswrapper[4825]: E0219 00:08:53.065499 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.065565 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:53 crc kubenswrapper[4825]: E0219 00:08:53.065693 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:53 crc kubenswrapper[4825]: E0219 00:08:53.065799 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:53 crc kubenswrapper[4825]: E0219 00:08:53.080930 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:53Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.087013 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.087075 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.087094 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.087122 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.087142 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: E0219 00:08:53.105464 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:53Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.111777 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.111824 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.111834 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.111851 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.111863 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: E0219 00:08:53.127112 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:53Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.132903 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.132975 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.132989 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.133004 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.133031 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: E0219 00:08:53.148427 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:53Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.153602 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.153654 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.153674 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.153701 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.153719 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: E0219 00:08:53.174087 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:53Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:53 crc kubenswrapper[4825]: E0219 00:08:53.174320 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.177066 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.177125 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.177145 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.177172 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.177190 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.280901 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.280967 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.280994 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.281028 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.281052 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.385452 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.385558 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.385582 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.385606 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.385625 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.489069 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.489127 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.489143 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.489165 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.489184 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.592031 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.592081 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.592095 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.592116 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.592136 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.694951 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.695016 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.695036 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.695058 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.695077 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.797752 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.797826 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.797849 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.797884 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.797908 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.900867 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.900913 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.900927 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.900944 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:53 crc kubenswrapper[4825]: I0219 00:08:53.900955 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:53Z","lastTransitionTime":"2026-02-19T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.004292 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.004350 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.004361 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.004389 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.004404 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:54Z","lastTransitionTime":"2026-02-19T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.045500 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:50:22.336220643 +0000 UTC Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.065797 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:54 crc kubenswrapper[4825]: E0219 00:08:54.065923 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.106629 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.106660 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.106670 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.106682 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.106693 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:54Z","lastTransitionTime":"2026-02-19T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.208763 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.208862 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.208875 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.208888 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.208898 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:54Z","lastTransitionTime":"2026-02-19T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.311424 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.311492 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.311565 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.311601 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.311626 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:54Z","lastTransitionTime":"2026-02-19T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.415278 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.415377 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.415397 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.415421 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.415444 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:54Z","lastTransitionTime":"2026-02-19T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.518289 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.518357 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.518377 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.518403 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.518420 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:54Z","lastTransitionTime":"2026-02-19T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.621702 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.621765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.621783 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.621808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.621827 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:54Z","lastTransitionTime":"2026-02-19T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.727858 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.727935 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.727954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.727980 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.728005 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:54Z","lastTransitionTime":"2026-02-19T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.831607 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.831674 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.831690 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.831713 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.831727 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:54Z","lastTransitionTime":"2026-02-19T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.934064 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.934125 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.934143 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.934167 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:54 crc kubenswrapper[4825]: I0219 00:08:54.934187 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:54Z","lastTransitionTime":"2026-02-19T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.036729 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.036779 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.036791 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.036810 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.036821 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:55Z","lastTransitionTime":"2026-02-19T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.046008 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 02:37:02.326666552 +0000 UTC Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.065444 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.065532 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.065569 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:55 crc kubenswrapper[4825]: E0219 00:08:55.065618 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:55 crc kubenswrapper[4825]: E0219 00:08:55.065695 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:55 crc kubenswrapper[4825]: E0219 00:08:55.065748 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.081026 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.093188 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.107600 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.121175 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.135845 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9505df7615b5027220ed25ee309b0a066503c64d3f8f45ef3fc23de1af2ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"2026-02-19T00:07:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c\\\\n2026-02-19T00:07:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c to /host/opt/cni/bin/\\\\n2026-02-19T00:07:59Z [verbose] multus-daemon started\\\\n2026-02-19T00:07:59Z [verbose] Readiness Indicator file check\\\\n2026-02-19T00:08:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.139772 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.139838 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.139862 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 
00:08:55.139887 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.139904 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:55Z","lastTransitionTime":"2026-02-19T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.147694 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.165262 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.180271 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc 
kubenswrapper[4825]: I0219 00:08:55.191474 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7163b7fc-078f-4584-b38a-07ca1c80a2f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bb2cff8e4fc6646f9ef1474f78f40ba15d5a6ebc3a650f358f0dc714a70652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.204935 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.218618 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.233288 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.242791 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.242838 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.242850 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.242869 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.242880 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:55Z","lastTransitionTime":"2026-02-19T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.246376 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.259850 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3482bfb-9d88-41c4-b80c-8152e044df34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3399fe5505d69fda5547fa2b30a745b1e14c3b4efc70d848b052db43fd8d65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3479bc97f163b6b20bd4ff73f1b4c6c4a984f626a5ab0bfbf38ea47f03ea88a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5ba6ec8bfc6a365ea9af422f7c6ec0479adbeef9d26a7afe8c66f4ba339482b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.272378 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.284032 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.304265 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"message\\\":\\\"IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0219 00:08:45.924480 6837 services_controller.go:360] Finished syncing service metrics on 
namespace openshift-authentication-operator for network=default : 2.595542ms\\\\nI0219 00:08:45.924553 6837 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0219 00:08:45.924578 6837 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0219 00:08:45.924369 6837 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.316669 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:08:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.346219 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.346270 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.346285 4825 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.346303 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.346316 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:55Z","lastTransitionTime":"2026-02-19T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.448743 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.448811 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.448823 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.448840 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.448854 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:55Z","lastTransitionTime":"2026-02-19T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.551348 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.551403 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.551418 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.551439 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.551454 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:55Z","lastTransitionTime":"2026-02-19T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.654270 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.654339 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.654351 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.654371 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.654387 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:55Z","lastTransitionTime":"2026-02-19T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.756484 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.756581 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.756600 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.756628 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.756647 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:55Z","lastTransitionTime":"2026-02-19T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.859904 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.860046 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.860071 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.860097 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.860116 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:55Z","lastTransitionTime":"2026-02-19T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.962923 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.963006 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.963031 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.963061 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:55 crc kubenswrapper[4825]: I0219 00:08:55.963084 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:55Z","lastTransitionTime":"2026-02-19T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.047073 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:40:12.188034684 +0000 UTC Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.065674 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:56 crc kubenswrapper[4825]: E0219 00:08:56.065857 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.067053 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.067085 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.067096 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.067110 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.067121 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:56Z","lastTransitionTime":"2026-02-19T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.171923 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.171973 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.171984 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.171999 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.172010 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:56Z","lastTransitionTime":"2026-02-19T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.275386 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.275451 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.275471 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.275496 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.275546 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:56Z","lastTransitionTime":"2026-02-19T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.379798 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.379883 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.379903 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.379931 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.379959 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:56Z","lastTransitionTime":"2026-02-19T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.483813 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.483884 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.483904 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.483929 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.483949 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:56Z","lastTransitionTime":"2026-02-19T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.587953 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.588011 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.588033 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.588062 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.588087 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:56Z","lastTransitionTime":"2026-02-19T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.690682 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.690747 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.690766 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.690789 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.690807 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:56Z","lastTransitionTime":"2026-02-19T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.793952 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.794027 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.794045 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.794070 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.794088 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:56Z","lastTransitionTime":"2026-02-19T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.896670 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.896779 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.896795 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.896819 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.896836 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:56Z","lastTransitionTime":"2026-02-19T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.999480 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.999555 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.999566 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:56 crc kubenswrapper[4825]: I0219 00:08:56.999584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:56.999595 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:56Z","lastTransitionTime":"2026-02-19T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.047455 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:44:17.777119103 +0000 UTC Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.065154 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.065296 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.065313 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:57 crc kubenswrapper[4825]: E0219 00:08:57.065468 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:57 crc kubenswrapper[4825]: E0219 00:08:57.065602 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:57 crc kubenswrapper[4825]: E0219 00:08:57.065889 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.103463 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.103574 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.103601 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.103636 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.103660 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:57Z","lastTransitionTime":"2026-02-19T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.207992 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.208084 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.208106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.208129 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.208147 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:57Z","lastTransitionTime":"2026-02-19T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.310970 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.311021 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.311036 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.311056 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.311072 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:57Z","lastTransitionTime":"2026-02-19T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.414580 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.414639 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.414658 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.414677 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.414693 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:57Z","lastTransitionTime":"2026-02-19T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.517867 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.517926 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.517951 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.517973 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.517987 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:57Z","lastTransitionTime":"2026-02-19T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.621878 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.621942 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.621962 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.621986 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.622005 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:57Z","lastTransitionTime":"2026-02-19T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.724829 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.724886 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.724914 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.724946 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.724970 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:57Z","lastTransitionTime":"2026-02-19T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.828315 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.828425 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.828445 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.828478 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.828499 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:57Z","lastTransitionTime":"2026-02-19T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.932809 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.932888 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.932911 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.932943 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:57 crc kubenswrapper[4825]: I0219 00:08:57.932966 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:57Z","lastTransitionTime":"2026-02-19T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.036426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.036557 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.036577 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.036602 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.036624 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:58Z","lastTransitionTime":"2026-02-19T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.048289 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:54:04.835402894 +0000 UTC Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.064946 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.065175 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.140652 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.141215 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.141235 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.141258 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.141276 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:58Z","lastTransitionTime":"2026-02-19T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.244453 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.244604 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.244627 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.244654 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.244671 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:58Z","lastTransitionTime":"2026-02-19T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.347900 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.347979 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.348001 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.348030 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.348048 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:58Z","lastTransitionTime":"2026-02-19T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.451099 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.451176 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.451201 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.451248 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.451272 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:58Z","lastTransitionTime":"2026-02-19T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.554932 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.555003 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.555024 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.555050 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.555078 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:58Z","lastTransitionTime":"2026-02-19T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.658873 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.658959 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.658984 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.659014 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.659037 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:58Z","lastTransitionTime":"2026-02-19T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.762725 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.762798 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.762814 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.762842 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.762859 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:58Z","lastTransitionTime":"2026-02-19T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.865324 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.865394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.865413 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.865437 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.865455 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:58Z","lastTransitionTime":"2026-02-19T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.874176 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.874312 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.874340 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.874307428 +0000 UTC m=+148.565273515 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.874430 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.874492 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.874560 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.874566 4825 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:08:58 crc 
kubenswrapper[4825]: E0219 00:08:58.874652 4825 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.874684 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.874647388 +0000 UTC m=+148.565613475 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.874749 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.874789 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.874811 4825 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.874844 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.874882 4825 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.874901 4825 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.874768 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.87474555 +0000 UTC m=+148.565711667 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.875100 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.875002867 +0000 UTC m=+148.565968934 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:58 crc kubenswrapper[4825]: E0219 00:08:58.875129 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.87511821 +0000 UTC m=+148.566084277 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.968699 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.968777 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.968796 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.968821 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:58 crc kubenswrapper[4825]: I0219 00:08:58.968840 4825 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:58Z","lastTransitionTime":"2026-02-19T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.049469 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:30:11.405487165 +0000 UTC Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.064970 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.064989 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:59 crc kubenswrapper[4825]: E0219 00:08:59.065243 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.065276 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:59 crc kubenswrapper[4825]: E0219 00:08:59.065648 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:59 crc kubenswrapper[4825]: E0219 00:08:59.066071 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.066292 4825 scope.go:117] "RemoveContainer" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" Feb 19 00:08:59 crc kubenswrapper[4825]: E0219 00:08:59.066585 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.071612 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.071664 4825 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.071682 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.071703 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.071719 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:59Z","lastTransitionTime":"2026-02-19T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.082028 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.173870 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.173912 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.173940 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.173953 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.173963 4825 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:59Z","lastTransitionTime":"2026-02-19T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.277501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.277582 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.277601 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.277626 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.277647 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:59Z","lastTransitionTime":"2026-02-19T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.381910 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.382007 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.382027 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.382051 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.382068 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:59Z","lastTransitionTime":"2026-02-19T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.485004 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.485074 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.485093 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.485120 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.485139 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:59Z","lastTransitionTime":"2026-02-19T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.589021 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.589099 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.589119 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.589144 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.589163 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:59Z","lastTransitionTime":"2026-02-19T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.692937 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.693010 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.693028 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.693060 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.693084 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:59Z","lastTransitionTime":"2026-02-19T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.796089 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.796163 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.796175 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.796194 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.796207 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:59Z","lastTransitionTime":"2026-02-19T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.899329 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.899382 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.899392 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.899412 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:08:59 crc kubenswrapper[4825]: I0219 00:08:59.899426 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:08:59Z","lastTransitionTime":"2026-02-19T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.003170 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.003212 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.003222 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.003237 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.003249 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:00Z","lastTransitionTime":"2026-02-19T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.050189 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:29:07.148799542 +0000 UTC Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.065978 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:00 crc kubenswrapper[4825]: E0219 00:09:00.066204 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.107010 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.107111 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.107124 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.107146 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.107160 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:00Z","lastTransitionTime":"2026-02-19T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.209249 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.209349 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.209367 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.209396 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.209414 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:00Z","lastTransitionTime":"2026-02-19T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.313097 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.313152 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.313162 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.313178 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.313210 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:00Z","lastTransitionTime":"2026-02-19T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.416891 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.416960 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.416977 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.417004 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.417031 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:00Z","lastTransitionTime":"2026-02-19T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.520006 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.520069 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.520087 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.520126 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.520147 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:00Z","lastTransitionTime":"2026-02-19T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.624037 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.624079 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.624087 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.624101 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.624112 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:00Z","lastTransitionTime":"2026-02-19T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.726898 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.726972 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.726995 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.727024 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.727044 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:00Z","lastTransitionTime":"2026-02-19T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.831176 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.831245 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.831264 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.831296 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.831323 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:00Z","lastTransitionTime":"2026-02-19T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.934230 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.934305 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.934323 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.934353 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:00 crc kubenswrapper[4825]: I0219 00:09:00.934375 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:00Z","lastTransitionTime":"2026-02-19T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.040180 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.040255 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.040282 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.040326 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.040347 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:01Z","lastTransitionTime":"2026-02-19T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.050800 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:57:27.552727712 +0000 UTC Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.065077 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.065111 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.065142 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:01 crc kubenswrapper[4825]: E0219 00:09:01.065326 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:01 crc kubenswrapper[4825]: E0219 00:09:01.065483 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:01 crc kubenswrapper[4825]: E0219 00:09:01.065655 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.144081 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.144168 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.144190 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.144231 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.144259 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:01Z","lastTransitionTime":"2026-02-19T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.247171 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.247252 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.247278 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.247318 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.247348 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:01Z","lastTransitionTime":"2026-02-19T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.350853 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.350927 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.350946 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.350975 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.350997 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:01Z","lastTransitionTime":"2026-02-19T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.455326 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.455406 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.455430 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.455460 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.455482 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:01Z","lastTransitionTime":"2026-02-19T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.559472 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.559599 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.559617 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.559653 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.559693 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:01Z","lastTransitionTime":"2026-02-19T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.663266 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.663317 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.663327 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.663344 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.663354 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:01Z","lastTransitionTime":"2026-02-19T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.766842 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.766895 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.766905 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.766920 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.766929 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:01Z","lastTransitionTime":"2026-02-19T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.870410 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.870583 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.870609 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.870641 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.870660 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:01Z","lastTransitionTime":"2026-02-19T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.974119 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.974213 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.974231 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.974263 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:01 crc kubenswrapper[4825]: I0219 00:09:01.974286 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:01Z","lastTransitionTime":"2026-02-19T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.051018 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:49:05.758659334 +0000 UTC Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.065576 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:02 crc kubenswrapper[4825]: E0219 00:09:02.065851 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.078948 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.079007 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.079025 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.079047 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.079066 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:02Z","lastTransitionTime":"2026-02-19T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.182301 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.182417 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.182475 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.182503 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.182573 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:02Z","lastTransitionTime":"2026-02-19T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.286170 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.286240 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.286265 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.286291 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.286313 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:02Z","lastTransitionTime":"2026-02-19T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.389347 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.389420 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.389443 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.389473 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.389500 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:02Z","lastTransitionTime":"2026-02-19T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.492384 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.492433 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.492651 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.492666 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.492676 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:02Z","lastTransitionTime":"2026-02-19T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.596669 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.596744 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.596761 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.596790 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.596809 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:02Z","lastTransitionTime":"2026-02-19T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.701377 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.701454 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.701472 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.701499 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.701559 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:02Z","lastTransitionTime":"2026-02-19T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.805298 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.805377 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.805402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.805434 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.805458 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:02Z","lastTransitionTime":"2026-02-19T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.909485 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.909649 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.909670 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.909703 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:02 crc kubenswrapper[4825]: I0219 00:09:02.909726 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:02Z","lastTransitionTime":"2026-02-19T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.013025 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.013112 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.013134 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.013161 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.013179 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.051840 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:57:16.084469494 +0000 UTC Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.065359 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.065487 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:03 crc kubenswrapper[4825]: E0219 00:09:03.065615 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.065633 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:03 crc kubenswrapper[4825]: E0219 00:09:03.065800 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:03 crc kubenswrapper[4825]: E0219 00:09:03.066037 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.116370 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.116426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.116442 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.116465 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.116553 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.220433 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.220557 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.220575 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.220599 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.220618 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.323923 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.323976 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.323986 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.324012 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.324024 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.428406 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.428470 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.428483 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.428500 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.428532 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.447931 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.447993 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.448011 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.448037 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.448063 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: E0219 00:09:03.471348 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.477294 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.477375 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.477398 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.477422 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.477442 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: E0219 00:09:03.507767 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.514781 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.514861 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.514882 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.514922 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.514953 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: E0219 00:09:03.537640 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.543193 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.543246 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.543260 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.543281 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.543297 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: E0219 00:09:03.562291 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.567695 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.567761 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.567778 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.567802 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.567821 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: E0219 00:09:03.585931 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:03Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:03 crc kubenswrapper[4825]: E0219 00:09:03.586104 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.588347 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.588385 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.588399 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.588420 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.588438 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.691196 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.691257 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.691275 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.691300 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.691348 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.796431 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.796536 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.796558 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.796584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.796605 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.900773 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.900829 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.900847 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.900879 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:03 crc kubenswrapper[4825]: I0219 00:09:03.900901 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:03Z","lastTransitionTime":"2026-02-19T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.004700 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.004763 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.004787 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.004814 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.004833 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:04Z","lastTransitionTime":"2026-02-19T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.052807 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:40:53.815160296 +0000 UTC Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.065391 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:04 crc kubenswrapper[4825]: E0219 00:09:04.065650 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.108322 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.108390 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.108409 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.108440 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.108462 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:04Z","lastTransitionTime":"2026-02-19T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.212743 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.212808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.212825 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.212860 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.212885 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:04Z","lastTransitionTime":"2026-02-19T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.315974 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.316430 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.316745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.317000 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.317165 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:04Z","lastTransitionTime":"2026-02-19T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.419947 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.420757 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.420810 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.420854 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.420897 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:04Z","lastTransitionTime":"2026-02-19T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.525799 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.525951 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.526019 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.526065 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.526092 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:04Z","lastTransitionTime":"2026-02-19T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.629352 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.629394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.629406 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.629427 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.629440 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:04Z","lastTransitionTime":"2026-02-19T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.732950 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.733029 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.733051 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.733086 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.733113 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:04Z","lastTransitionTime":"2026-02-19T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.836745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.836816 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.836830 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.836855 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.836873 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:04Z","lastTransitionTime":"2026-02-19T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.939931 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.939979 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.939994 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.940017 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:04 crc kubenswrapper[4825]: I0219 00:09:04.940030 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:04Z","lastTransitionTime":"2026-02-19T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.044023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.044086 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.044106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.044133 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.044154 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:05Z","lastTransitionTime":"2026-02-19T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.053552 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:22:08.015223822 +0000 UTC Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.065208 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:05 crc kubenswrapper[4825]: E0219 00:09:05.065355 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.065382 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.065487 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:05 crc kubenswrapper[4825]: E0219 00:09:05.065864 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:05 crc kubenswrapper[4825]: E0219 00:09:05.065967 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.094081 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"926bf6b3-77e1-4fd1-846e-3bb651c25002\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshi
ft-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:07:55Z\\\",\\\"message\\\":\\\"le observer\\\\nW0219 00:07:54.850664 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0219 00:07:54.850829 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:07:54.851685 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2677297410/tls.crt::/tmp/serving-cert-2677297410/tls.key\\\\\\\"\\\\nI0219 00:07:55.200843 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:07:55.205176 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:07:55.205196 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:07:55.205219 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:07:55.205224 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:07:55.216031 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:07:55.216132 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216160 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:07:55.216182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0219 00:07:55.216064 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered 
and discovery information is complete\\\\nW0219 00:07:55.216203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:07:55.216269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:07:55.216275 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:07:55.218460 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.110907 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f526c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85963def-84d2-4e82-a252-8d8389151c81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0ec4dc4121baca345345062207777e486ae688c3dc827ea921d5c45d12134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hrl6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f526c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.125684 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zfx7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2daa6777-c1b1-4fae-9c14-cfe10867288a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9505df7615b5027220ed25ee309b0a066503c64d3f8f45ef3fc23de1af2ac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:44Z\\\",\\\"message\\\":\\\"2026-02-19T00:07:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c\\\\n2026-02-19T00:07:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2ae34d03-1e68-4bc9-810f-e950f6e3733c to /host/opt/cni/bin/\\\\n2026-02-19T00:07:59Z [verbose] multus-daemon started\\\\n2026-02-19T00:07:59Z [verbose] Readiness Indicator file check\\\\n2026-02-19T00:08:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rqrhz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zfx7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.141937 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vpm6d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a723093-6f53-4ca7-aa56-53ff684e90bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10aa23fad45cff564d3dc1b4a0d7ce0372d06f81a9d701b203668697dfad5f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84md\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vpm6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.147855 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.147930 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.147947 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.147973 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.147998 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:05Z","lastTransitionTime":"2026-02-19T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.163670 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7163b7fc-078f-4584-b38a-07ca1c80a2f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bb2cff8e4fc6646f9ef1474f78f40ba15d5a6ebc3a650f358f0dc714a70652\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dd53591912f8216daa5b7094c9772e9690a4aa812f7259e5e4d1882bc70865b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.200695 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba67f7a8-bb2b-45c8-96a7-faa30001f98a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa76390f44634ac033afdc280a3a6d7dc7ad22f6a27e0d2028436be587331e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9c7ca92a9aef8f55e86012761e8fa21f37f11c1b35ec242adb9d225b72c0e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e246d824ffd8501597c1c42fc00a61a18957688d96291a4e69cd4bd8033304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c66bdbf514b0e388f10749f473bda6644126662866a7c76a481f4ec2e046a0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d5455060a39bc50828900861466a48315ac23309be804f6462f2399449881f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aac5e64f80becccbeb75b9d7c8ea3c9eb6130da2ef997f35f432e929150aa10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3aac5e64f80becccbeb75b9d7c8ea3c9eb6130da2ef997f35f432e929150aa10\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8120c8cc3f0d62fe1f26cd01914563f0665f0bb830e70e90ac6f64709e502c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8120c8cc3f0d62fe1f26cd01914563f0665f0bb830e70e90ac6f64709e502c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ffb9515efa6c88c2a367aa9f6e46660aa5d40a7c70ad330ef7f706f18493da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ffb9515efa6c88c2a367aa9f6e46660aa5d40a7c70ad330ef7f706f18493da9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.217311 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42728ae3-5b24-4a19-acad-2e758919d44d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df82cc5561b2201f06dcffd4cba8f816f23348dba5e28212ef6acf5052f46b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12612531e1d06f5556cccb09a044d27d736cc0f1eaf20bf82d02c705304a22a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:
36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b589d26d527ba74028e71d1dffb68cc90c1f0945d7bcd21ab60cdb9c3197a68\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.233260 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecea14568d67e8484ed8dc50f531d7f857b018d792068269d95cd90be3af2ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.249227 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1240fd52063c821713af8a9e7a4d8f3f7fd6a101e1f0642328693cc078fb2e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8cdc6db9ebc81bda5a1c0b235a8e3ee3daec8f1965e436075be625fcbe618557\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.251949 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.252018 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.252033 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.252059 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.252074 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:05Z","lastTransitionTime":"2026-02-19T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.266954 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.285128 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efe56e91-46ea-4365-8dc4-643fafea609a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://681bf575c4d0b8c282987d67eaabc6e8f972ef8ecce1bf266d316e73fde5c881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9679b8495c5bf7948e9bdb910056e30d661cfb925c765256046e6ac40ecfc35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e115f8159c221bc352fdc6a0121c028746c342813bc51cbd60bd5a84212f07f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:00Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05125f3150f9f8093036e975e399a31bac110e27affdd772006d2b9501bde1af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://885c5
2cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://885c52cdc4948aa1e47740ab4f5d3108aa68142b6379e07528df06f2bfaf5cfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2019d34cc4d0eb98a49114c6d945c0b2745ef5dbcb3bf9635763fe21b0846f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce4b4b7ac9b25c9048c08a2ba2c2f7edeea289d6601f095db502af433137193\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8s5kg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lb5zm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.299956 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80aa664d-e111-41f6-815d-f4185e1f72ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmbbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:12Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bhnmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc 
kubenswrapper[4825]: I0219 00:09:05.315429 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3482bfb-9d88-41c4-b80c-8152e044df34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3399fe5505d69fda5547fa2b30a745b1e14c3b4efc70d848b052db43fd8d65a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3479bc97f163b6b20bd4ff73f1b4c6c4a984f626a5ab0bfbf38ea47f03ea88a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5ba6ec8bfc6a365ea9af422f7c6ec0479adbeef9d26a7afe8c66f4ba339482b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ddf04fba00c4ec4e1eed1942ec9a3fb84d951437553481d787c701eff825aea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:35Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.330771 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.344971 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34a5e4cfb074f2a563db9c99ff3cb9291f1e5fca4a4cbbce4664a197942ae7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2
278117b99d5b8b095a271a5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9899\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-tggq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.354757 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.354814 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.354830 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc 
kubenswrapper[4825]: I0219 00:09:05.354851 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.354866 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:05Z","lastTransitionTime":"2026-02-19T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.374092 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c24ef0e-b402-4585-a79a-6b98b9896f5a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:08:45Z\\\",\\\"message\\\":\\\"IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0219 00:08:45.924480 6837 services_controller.go:360] Finished syncing service metrics on 
namespace openshift-authentication-operator for network=default : 2.595542ms\\\\nI0219 00:08:45.924553 6837 lb_config.go:1031] Cluster endpoints for openshift-kube-controller-manager-operator/metrics for network=default are: map[]\\\\nI0219 00:08:45.924578 6837 services_controller.go:356] Processing sync for service openshift-machine-api/machine-api-operator for network=default\\\\nF0219 00:08:45.924369 6837 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:08:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1acc49447aff3d276d
771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sc4zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:07:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bdpln\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.387861 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf7738b0-f0ce-4b7c-85fb-0c4fc9ce443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8647e52579f1359fc76e16c54ace94671c09628615e12b61d3ecff41ce71ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95d7a84e3ca4855d8ed65e1ad44f67d39c400
a14ead7ae545b99e7e91b24d96e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5f6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:08:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vhfl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.402475 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.418583 4825 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74712dc98d4ac9024e7f002b99fc853997e4be4674c86e66a5578c7a84ff39db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:09:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.458544 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.459679 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.459719 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.459757 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.459786 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:05Z","lastTransitionTime":"2026-02-19T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.563190 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.563607 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.563848 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.564376 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.564724 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:05Z","lastTransitionTime":"2026-02-19T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.668304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.668359 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.668379 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.668405 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.668422 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:05Z","lastTransitionTime":"2026-02-19T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.771476 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.771626 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.771658 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.771693 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.771718 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:05Z","lastTransitionTime":"2026-02-19T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.875555 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.875649 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.875680 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.875715 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.875742 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:05Z","lastTransitionTime":"2026-02-19T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.979925 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.979969 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.979982 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.980004 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:05 crc kubenswrapper[4825]: I0219 00:09:05.980018 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:05Z","lastTransitionTime":"2026-02-19T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.053731 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:50:16.815734962 +0000 UTC Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.065325 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:06 crc kubenswrapper[4825]: E0219 00:09:06.065652 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.083645 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.083688 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.083700 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.083729 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.083744 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:06Z","lastTransitionTime":"2026-02-19T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.188976 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.189025 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.189081 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.189111 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.189125 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:06Z","lastTransitionTime":"2026-02-19T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.292435 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.292934 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.293011 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.293095 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.293162 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:06Z","lastTransitionTime":"2026-02-19T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.397178 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.397244 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.397264 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.397294 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.397321 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:06Z","lastTransitionTime":"2026-02-19T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.500990 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.501044 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.501058 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.501080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.501095 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:06Z","lastTransitionTime":"2026-02-19T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.604350 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.604431 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.604456 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.604490 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.604572 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:06Z","lastTransitionTime":"2026-02-19T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.707630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.708004 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.708101 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.708204 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.708302 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:06Z","lastTransitionTime":"2026-02-19T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.811773 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.811858 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.811880 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.811908 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.811930 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:06Z","lastTransitionTime":"2026-02-19T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.914951 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.915028 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.915045 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.915076 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:06 crc kubenswrapper[4825]: I0219 00:09:06.915092 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:06Z","lastTransitionTime":"2026-02-19T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.018356 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.018425 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.018443 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.018471 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.018493 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:07Z","lastTransitionTime":"2026-02-19T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.054865 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:01:24.527454704 +0000 UTC Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.065640 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.065718 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:07 crc kubenswrapper[4825]: E0219 00:09:07.065796 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.065727 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:07 crc kubenswrapper[4825]: E0219 00:09:07.065915 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:07 crc kubenswrapper[4825]: E0219 00:09:07.066025 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.122098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.122169 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.122184 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.122213 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.122231 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:07Z","lastTransitionTime":"2026-02-19T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.225659 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.225766 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.225788 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.225817 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.225841 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:07Z","lastTransitionTime":"2026-02-19T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.329952 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.330077 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.330164 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.330261 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.330296 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:07Z","lastTransitionTime":"2026-02-19T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.433641 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.433704 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.433723 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.433750 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.433771 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:07Z","lastTransitionTime":"2026-02-19T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.537454 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.537560 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.537584 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.537615 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.537635 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:07Z","lastTransitionTime":"2026-02-19T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.641387 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.641948 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.642138 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.642499 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.642684 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:07Z","lastTransitionTime":"2026-02-19T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.745908 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.745987 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.746011 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.746040 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.746062 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:07Z","lastTransitionTime":"2026-02-19T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.849935 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.849986 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.849999 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.850020 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.850032 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:07Z","lastTransitionTime":"2026-02-19T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.953960 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.954147 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.954168 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.954224 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:07 crc kubenswrapper[4825]: I0219 00:09:07.954246 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:07Z","lastTransitionTime":"2026-02-19T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.055726 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:28:44.134815044 +0000 UTC Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.058754 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.058830 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.058849 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.058880 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.058900 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:08Z","lastTransitionTime":"2026-02-19T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.065238 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:08 crc kubenswrapper[4825]: E0219 00:09:08.065863 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.162487 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.163023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.163042 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.163070 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.163091 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:08Z","lastTransitionTime":"2026-02-19T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.266834 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.266912 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.266929 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.266959 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.266978 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:08Z","lastTransitionTime":"2026-02-19T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.371896 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.371980 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.372002 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.372036 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.372056 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:08Z","lastTransitionTime":"2026-02-19T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.479409 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.479473 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.479488 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.479556 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.479580 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:08Z","lastTransitionTime":"2026-02-19T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.583707 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.583768 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.583782 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.583803 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.583817 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:08Z","lastTransitionTime":"2026-02-19T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.686880 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.686948 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.686965 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.686990 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.687010 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:08Z","lastTransitionTime":"2026-02-19T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.790941 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.791078 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.791134 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.791163 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.791214 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:08Z","lastTransitionTime":"2026-02-19T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.895221 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.895280 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.895293 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.895317 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.895334 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:08Z","lastTransitionTime":"2026-02-19T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.998807 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.999423 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.999635 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.999806 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:08 crc kubenswrapper[4825]: I0219 00:09:08.999939 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:08Z","lastTransitionTime":"2026-02-19T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.056713 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:33:24.38280041 +0000 UTC Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.066091 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.066172 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.066091 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:09 crc kubenswrapper[4825]: E0219 00:09:09.066306 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:09 crc kubenswrapper[4825]: E0219 00:09:09.066642 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:09 crc kubenswrapper[4825]: E0219 00:09:09.066690 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.103577 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.103646 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.103665 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.103692 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.103722 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:09Z","lastTransitionTime":"2026-02-19T00:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.217200 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.217262 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.217283 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.217311 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.217332 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:09Z","lastTransitionTime":"2026-02-19T00:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.320827 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.320897 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.320923 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.320957 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.320980 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:09Z","lastTransitionTime":"2026-02-19T00:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.423241 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.423619 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.423689 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.423765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.423845 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:09Z","lastTransitionTime":"2026-02-19T00:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.526124 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.526160 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.526171 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.526186 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.526196 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:09Z","lastTransitionTime":"2026-02-19T00:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.628796 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.628832 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.628843 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.628862 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.628873 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:09Z","lastTransitionTime":"2026-02-19T00:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.732650 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.732708 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.732727 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.732750 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.732766 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:09Z","lastTransitionTime":"2026-02-19T00:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.836953 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.837023 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.837044 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.837072 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.837091 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:09Z","lastTransitionTime":"2026-02-19T00:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.940665 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.941242 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.941401 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.941591 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:09 crc kubenswrapper[4825]: I0219 00:09:09.941768 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:09Z","lastTransitionTime":"2026-02-19T00:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.046155 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.046280 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.046301 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.046332 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.046354 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:10Z","lastTransitionTime":"2026-02-19T00:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.057437 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:57:28.320997369 +0000 UTC Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.065178 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:10 crc kubenswrapper[4825]: E0219 00:09:10.065395 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.150734 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.150818 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.150843 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.150879 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.150905 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:10Z","lastTransitionTime":"2026-02-19T00:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.255313 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.255394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.255418 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.255448 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.255468 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:10Z","lastTransitionTime":"2026-02-19T00:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.358856 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.358923 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.358941 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.358966 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.358982 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:10Z","lastTransitionTime":"2026-02-19T00:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.462566 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.462632 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.462650 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.462683 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.462702 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:10Z","lastTransitionTime":"2026-02-19T00:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.567103 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.567170 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.567181 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.567202 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.567218 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:10Z","lastTransitionTime":"2026-02-19T00:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.670599 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.670688 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.670702 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.670723 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.670763 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:10Z","lastTransitionTime":"2026-02-19T00:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.774170 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.774252 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.774276 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.774309 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.774333 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:10Z","lastTransitionTime":"2026-02-19T00:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.877777 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.877869 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.877892 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.877928 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.877958 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:10Z","lastTransitionTime":"2026-02-19T00:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.981708 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.981780 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.981808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.981838 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:10 crc kubenswrapper[4825]: I0219 00:09:10.981858 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:10Z","lastTransitionTime":"2026-02-19T00:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.057850 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:50:15.721538025 +0000 UTC Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.065371 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:11 crc kubenswrapper[4825]: E0219 00:09:11.065709 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.065771 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.065871 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:11 crc kubenswrapper[4825]: E0219 00:09:11.066384 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:11 crc kubenswrapper[4825]: E0219 00:09:11.066600 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.066916 4825 scope.go:117] "RemoveContainer" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" Feb 19 00:09:11 crc kubenswrapper[4825]: E0219 00:09:11.067273 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.084364 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.084426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.084452 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.084479 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.084537 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:11Z","lastTransitionTime":"2026-02-19T00:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.188696 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.188773 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.188790 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.188818 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.188836 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:11Z","lastTransitionTime":"2026-02-19T00:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.293079 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.293168 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.293193 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.293232 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.293256 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:11Z","lastTransitionTime":"2026-02-19T00:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.396467 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.396555 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.396572 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.396598 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.396617 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:11Z","lastTransitionTime":"2026-02-19T00:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.499797 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.499862 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.499874 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.499893 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.499905 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:11Z","lastTransitionTime":"2026-02-19T00:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.603272 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.603334 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.603354 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.603493 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.603557 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:11Z","lastTransitionTime":"2026-02-19T00:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.705747 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.705785 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.705797 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.705815 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.705827 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:11Z","lastTransitionTime":"2026-02-19T00:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.809873 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.809946 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.809966 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.809992 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.810012 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:11Z","lastTransitionTime":"2026-02-19T00:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.914360 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.914438 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.914457 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.914493 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:11 crc kubenswrapper[4825]: I0219 00:09:11.914540 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:11Z","lastTransitionTime":"2026-02-19T00:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.018472 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.018626 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.018645 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.018673 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.018693 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:12Z","lastTransitionTime":"2026-02-19T00:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.058429 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:25:55.698653165 +0000 UTC Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.065883 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:12 crc kubenswrapper[4825]: E0219 00:09:12.066073 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.123225 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.123309 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.123327 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.123353 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.123369 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:12Z","lastTransitionTime":"2026-02-19T00:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.227290 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.227387 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.227413 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.227449 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.227472 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:12Z","lastTransitionTime":"2026-02-19T00:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.332204 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.332279 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.332297 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.332318 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.332330 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:12Z","lastTransitionTime":"2026-02-19T00:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.435864 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.435923 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.435941 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.435966 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.435984 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:12Z","lastTransitionTime":"2026-02-19T00:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.539570 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.539960 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.540216 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.540368 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.540544 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:12Z","lastTransitionTime":"2026-02-19T00:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.643910 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.643963 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.643981 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.644000 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.644014 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:12Z","lastTransitionTime":"2026-02-19T00:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.747744 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.747820 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.747832 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.747848 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.747862 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:12Z","lastTransitionTime":"2026-02-19T00:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.851757 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.851850 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.852064 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.852101 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.852126 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:12Z","lastTransitionTime":"2026-02-19T00:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.955289 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.955821 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.956016 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.956172 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:12 crc kubenswrapper[4825]: I0219 00:09:12.956309 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:12Z","lastTransitionTime":"2026-02-19T00:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.059084 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:48:12.661411432 +0000 UTC Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.059755 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.059828 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.059864 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.059907 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.059932 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.065155 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:13 crc kubenswrapper[4825]: E0219 00:09:13.065329 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.065448 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.065639 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:13 crc kubenswrapper[4825]: E0219 00:09:13.066004 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:13 crc kubenswrapper[4825]: E0219 00:09:13.066260 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.163030 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.163566 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.163742 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.163956 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.164161 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.267350 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.267440 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.267460 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.267486 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.267534 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.370924 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.371006 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.371024 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.371059 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.371077 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.473899 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.473947 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.473965 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.473991 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.474010 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.577817 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.577874 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.577884 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.577905 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.577918 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.675624 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.675729 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.675751 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.675786 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.675806 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: E0219 00:09:13.693966 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.700336 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.700386 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.700402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.700426 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.700441 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: E0219 00:09:13.725368 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.732098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.732208 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.732226 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.732283 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.732301 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: E0219 00:09:13.756480 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.763046 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.763136 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.763158 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.763185 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.763205 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: E0219 00:09:13.785629 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.806444 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.806553 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.806585 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.806617 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.806636 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: E0219 00:09:13.826838 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7148ccf5-3e32-4158-b08b-88a47cea7ade\\\",\\\"systemUUID\\\":\\\"5c6d8be3-81c6-4c6a-89a0-311f75474e3b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:09:13Z is after 2025-08-24T17:21:41Z" Feb 19 00:09:13 crc kubenswrapper[4825]: E0219 00:09:13.827071 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.829277 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.829579 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.829684 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.829788 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.830073 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.934272 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.934823 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.934978 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.935114 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:13 crc kubenswrapper[4825]: I0219 00:09:13.935255 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:13Z","lastTransitionTime":"2026-02-19T00:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.039629 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.039733 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.039753 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.039783 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.039803 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:14Z","lastTransitionTime":"2026-02-19T00:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.060236 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 07:07:33.877729877 +0000 UTC Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.065822 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:14 crc kubenswrapper[4825]: E0219 00:09:14.066429 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.143364 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.143411 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.143445 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.143466 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.143479 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:14Z","lastTransitionTime":"2026-02-19T00:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.247304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.247369 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.247385 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.247407 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.247422 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:14Z","lastTransitionTime":"2026-02-19T00:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.350185 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.350235 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.350247 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.350264 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.350277 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:14Z","lastTransitionTime":"2026-02-19T00:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.453723 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.453799 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.453818 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.453858 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.453879 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:14Z","lastTransitionTime":"2026-02-19T00:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.563152 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.563215 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.563232 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.563255 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.563269 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:14Z","lastTransitionTime":"2026-02-19T00:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.665907 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.665954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.665966 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.665985 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.666002 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:14Z","lastTransitionTime":"2026-02-19T00:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.768177 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.768260 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.768279 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.768305 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.768326 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:14Z","lastTransitionTime":"2026-02-19T00:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.871918 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.871976 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.871989 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.872013 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.872029 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:14Z","lastTransitionTime":"2026-02-19T00:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.974966 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.975058 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.975085 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.975118 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:14 crc kubenswrapper[4825]: I0219 00:09:14.975165 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:14Z","lastTransitionTime":"2026-02-19T00:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.061128 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:44:37.314324849 +0000 UTC Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.065803 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.065815 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:15 crc kubenswrapper[4825]: E0219 00:09:15.066055 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.065815 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:15 crc kubenswrapper[4825]: E0219 00:09:15.066304 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:15 crc kubenswrapper[4825]: E0219 00:09:15.066369 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.078236 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.078287 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.078310 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.078334 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.078354 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:15Z","lastTransitionTime":"2026-02-19T00:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.173401 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-f526c" podStartSLOduration=79.173365138 podStartE2EDuration="1m19.173365138s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:15.172956527 +0000 UTC m=+100.863922604" watchObservedRunningTime="2026-02-19 00:09:15.173365138 +0000 UTC m=+100.864331215" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.181740 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.181829 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.182396 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.182439 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.182469 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:15Z","lastTransitionTime":"2026-02-19T00:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.223481 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vpm6d" podStartSLOduration=79.22344115 podStartE2EDuration="1m19.22344115s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:15.222393681 +0000 UTC m=+100.913359768" watchObservedRunningTime="2026-02-19 00:09:15.22344115 +0000 UTC m=+100.914407237" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.224177 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zfx7x" podStartSLOduration=79.224156029 podStartE2EDuration="1m19.224156029s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:15.204045305 +0000 UTC m=+100.895011662" watchObservedRunningTime="2026-02-19 00:09:15.224156029 +0000 UTC m=+100.915122116" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.255081 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.255036912 podStartE2EDuration="1m19.255036912s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:15.25460091 +0000 UTC m=+100.945567007" watchObservedRunningTime="2026-02-19 00:09:15.255036912 +0000 UTC m=+100.946002999" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.286351 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.286420 4825 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.286439 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.286468 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.286488 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:15Z","lastTransitionTime":"2026-02-19T00:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.302869 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.302840581 podStartE2EDuration="16.302840581s" podCreationTimestamp="2026-02-19 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:15.3020672 +0000 UTC m=+100.993033277" watchObservedRunningTime="2026-02-19 00:09:15.302840581 +0000 UTC m=+100.993806638" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.337921 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.337895339 podStartE2EDuration="1m16.337895339s" podCreationTimestamp="2026-02-19 00:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:15.337760845 +0000 UTC 
m=+101.028726932" watchObservedRunningTime="2026-02-19 00:09:15.337895339 +0000 UTC m=+101.028861456" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.390116 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.390189 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.390210 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.390240 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.390262 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:15Z","lastTransitionTime":"2026-02-19T00:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.415703 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lb5zm" podStartSLOduration=79.415677406 podStartE2EDuration="1m19.415677406s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:15.415545002 +0000 UTC m=+101.106511049" watchObservedRunningTime="2026-02-19 00:09:15.415677406 +0000 UTC m=+101.106643453" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.490607 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.490571972 podStartE2EDuration="31.490571972s" podCreationTimestamp="2026-02-19 00:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:15.473542412 +0000 UTC m=+101.164508489" watchObservedRunningTime="2026-02-19 00:09:15.490571972 +0000 UTC m=+101.181538049" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.493833 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.493893 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.493908 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.493930 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.493943 4825 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:15Z","lastTransitionTime":"2026-02-19T00:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.509697 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podStartSLOduration=79.509664839 podStartE2EDuration="1m19.509664839s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:15.509216287 +0000 UTC m=+101.200182334" watchObservedRunningTime="2026-02-19 00:09:15.509664839 +0000 UTC m=+101.200630896" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.551318 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vhfl7" podStartSLOduration=78.551287928 podStartE2EDuration="1m18.551287928s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:15.55100291 +0000 UTC m=+101.241968957" watchObservedRunningTime="2026-02-19 00:09:15.551287928 +0000 UTC m=+101.242253975" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.567738 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.567713132 podStartE2EDuration="48.567713132s" podCreationTimestamp="2026-02-19 00:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:15.567326481 +0000 UTC m=+101.258292528" watchObservedRunningTime="2026-02-19 00:09:15.567713132 +0000 UTC m=+101.258679169" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.596539 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.596941 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.597006 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.597072 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.597147 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:15Z","lastTransitionTime":"2026-02-19T00:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.700687 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.700745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.700762 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.700788 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.700805 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:15Z","lastTransitionTime":"2026-02-19T00:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.804304 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.804354 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.804371 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.804394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.804409 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:15Z","lastTransitionTime":"2026-02-19T00:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.907653 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.907731 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.907762 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.907791 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:15 crc kubenswrapper[4825]: I0219 00:09:15.907815 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:15Z","lastTransitionTime":"2026-02-19T00:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.012388 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.012450 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.012464 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.012485 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.012537 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:16Z","lastTransitionTime":"2026-02-19T00:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.061453 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:20:37.092531417 +0000 UTC Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.065956 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:16 crc kubenswrapper[4825]: E0219 00:09:16.066207 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.099662 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:16 crc kubenswrapper[4825]: E0219 00:09:16.099893 4825 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:09:16 crc kubenswrapper[4825]: E0219 00:09:16.100002 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs podName:80aa664d-e111-41f6-815d-f4185e1f72ff nodeName:}" failed. No retries permitted until 2026-02-19 00:10:20.099969001 +0000 UTC m=+165.790935088 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs") pod "network-metrics-daemon-bhnmw" (UID: "80aa664d-e111-41f6-815d-f4185e1f72ff") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.115745 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.115792 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.115805 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.115827 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.115843 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:16Z","lastTransitionTime":"2026-02-19T00:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.219278 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.219362 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.219384 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.219417 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.219440 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:16Z","lastTransitionTime":"2026-02-19T00:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.323402 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.323476 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.323493 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.323558 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.323579 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:16Z","lastTransitionTime":"2026-02-19T00:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.427106 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.427183 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.427205 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.427236 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.427262 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:16Z","lastTransitionTime":"2026-02-19T00:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.531283 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.531327 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.531338 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.531356 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.531370 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:16Z","lastTransitionTime":"2026-02-19T00:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.634567 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.634619 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.634630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.634650 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.634663 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:16Z","lastTransitionTime":"2026-02-19T00:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.738409 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.738497 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.738548 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.738583 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.738608 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:16Z","lastTransitionTime":"2026-02-19T00:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.843187 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.843263 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.843286 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.843317 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.843340 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:16Z","lastTransitionTime":"2026-02-19T00:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.947338 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.947404 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.947422 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.947447 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:16 crc kubenswrapper[4825]: I0219 00:09:16.947463 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:16Z","lastTransitionTime":"2026-02-19T00:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.051866 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.051906 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.051934 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.051955 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.051968 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:17Z","lastTransitionTime":"2026-02-19T00:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.062440 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:27:48.942896218 +0000 UTC Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.066009 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:17 crc kubenswrapper[4825]: E0219 00:09:17.066183 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.066226 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.066197 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:17 crc kubenswrapper[4825]: E0219 00:09:17.066441 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:17 crc kubenswrapper[4825]: E0219 00:09:17.066570 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.156060 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.156111 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.156124 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.156149 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.156163 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:17Z","lastTransitionTime":"2026-02-19T00:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.259810 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.259908 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.259931 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.259963 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.259982 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:17Z","lastTransitionTime":"2026-02-19T00:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.362586 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.362635 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.362652 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.362669 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.362679 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:17Z","lastTransitionTime":"2026-02-19T00:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.466497 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.466607 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.466627 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.466661 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.466685 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:17Z","lastTransitionTime":"2026-02-19T00:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.570259 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.570365 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.570388 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.570418 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.570436 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:17Z","lastTransitionTime":"2026-02-19T00:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.674212 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.674287 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.674312 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.674347 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.674449 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:17Z","lastTransitionTime":"2026-02-19T00:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.778560 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.778746 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.778779 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.778983 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.779066 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:17Z","lastTransitionTime":"2026-02-19T00:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.883185 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.883257 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.883276 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.883310 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.883335 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:17Z","lastTransitionTime":"2026-02-19T00:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.986215 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.986254 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.986265 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.986284 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:17 crc kubenswrapper[4825]: I0219 00:09:17.986298 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:17Z","lastTransitionTime":"2026-02-19T00:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.063307 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:15:11.546089449 +0000 UTC Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.065633 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:18 crc kubenswrapper[4825]: E0219 00:09:18.065784 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.089427 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.089459 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.089471 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.089487 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.089498 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:18Z","lastTransitionTime":"2026-02-19T00:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.192289 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.192353 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.192368 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.192390 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.192405 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:18Z","lastTransitionTime":"2026-02-19T00:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.296271 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.296345 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.296369 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.296398 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.296421 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:18Z","lastTransitionTime":"2026-02-19T00:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.400444 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.400560 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.400578 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.400610 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.400629 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:18Z","lastTransitionTime":"2026-02-19T00:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.504380 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.504436 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.504457 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.504490 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.504559 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:18Z","lastTransitionTime":"2026-02-19T00:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.608630 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.608707 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.608725 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.608755 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.608777 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:18Z","lastTransitionTime":"2026-02-19T00:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.713309 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.713373 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.713394 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.713425 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.713447 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:18Z","lastTransitionTime":"2026-02-19T00:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.816719 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.816783 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.816804 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.816836 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.816862 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:18Z","lastTransitionTime":"2026-02-19T00:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.921497 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.921690 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.921712 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.921775 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:18 crc kubenswrapper[4825]: I0219 00:09:18.921796 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:18Z","lastTransitionTime":"2026-02-19T00:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.025995 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.026051 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.026070 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.026098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.026120 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:19Z","lastTransitionTime":"2026-02-19T00:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.064270 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:37:27.268775009 +0000 UTC Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.065968 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.066054 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.065968 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:19 crc kubenswrapper[4825]: E0219 00:09:19.066234 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:19 crc kubenswrapper[4825]: E0219 00:09:19.066331 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:19 crc kubenswrapper[4825]: E0219 00:09:19.066562 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.129902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.130040 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.130070 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.130140 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.130167 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:19Z","lastTransitionTime":"2026-02-19T00:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.244076 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.244159 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.244185 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.244224 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.244251 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:19Z","lastTransitionTime":"2026-02-19T00:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.348748 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.348842 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.348870 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.348902 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.348927 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:19Z","lastTransitionTime":"2026-02-19T00:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.453138 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.453205 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.453223 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.453251 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.453272 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:19Z","lastTransitionTime":"2026-02-19T00:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.557981 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.558080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.558107 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.558143 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.558169 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:19Z","lastTransitionTime":"2026-02-19T00:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.662406 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.662487 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.662547 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.662570 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.662585 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:19Z","lastTransitionTime":"2026-02-19T00:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.765768 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.765848 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.765862 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.765880 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.765892 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:19Z","lastTransitionTime":"2026-02-19T00:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.876575 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.876674 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.876718 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.876743 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.876791 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:19Z","lastTransitionTime":"2026-02-19T00:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.981072 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.981182 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.981202 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.981267 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:19 crc kubenswrapper[4825]: I0219 00:09:19.981340 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:19Z","lastTransitionTime":"2026-02-19T00:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.065056 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:24:45.367156645 +0000 UTC Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.065272 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:20 crc kubenswrapper[4825]: E0219 00:09:20.065501 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.086960 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.087429 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.087576 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.087756 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.087858 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:20Z","lastTransitionTime":"2026-02-19T00:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.192572 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.193098 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.193250 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.193387 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.193575 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:20Z","lastTransitionTime":"2026-02-19T00:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.298275 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.298366 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.298391 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.298428 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.298452 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:20Z","lastTransitionTime":"2026-02-19T00:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.401853 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.401927 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.401954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.401987 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.402013 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:20Z","lastTransitionTime":"2026-02-19T00:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.506153 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.506698 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.506895 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.507050 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.507198 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:20Z","lastTransitionTime":"2026-02-19T00:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.611034 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.611089 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.611108 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.611137 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.611155 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:20Z","lastTransitionTime":"2026-02-19T00:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.714836 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.715710 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.715899 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.716078 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.716225 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:20Z","lastTransitionTime":"2026-02-19T00:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.826237 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.826568 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.826597 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.826629 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.826652 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:20Z","lastTransitionTime":"2026-02-19T00:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.930995 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.931060 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.931080 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.931110 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:20 crc kubenswrapper[4825]: I0219 00:09:20.931133 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:20Z","lastTransitionTime":"2026-02-19T00:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.034474 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.034572 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.034593 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.034619 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.034639 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:21Z","lastTransitionTime":"2026-02-19T00:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.065438 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.065721 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:21 crc kubenswrapper[4825]: E0219 00:09:21.065747 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.065781 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.065787 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:29:01.896174837 +0000 UTC Feb 19 00:09:21 crc kubenswrapper[4825]: E0219 00:09:21.065837 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:21 crc kubenswrapper[4825]: E0219 00:09:21.065879 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.138279 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.138333 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.138346 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.138368 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.138387 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:21Z","lastTransitionTime":"2026-02-19T00:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.244049 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.244115 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.244135 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.244164 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.244189 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:21Z","lastTransitionTime":"2026-02-19T00:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.347881 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.347928 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.347943 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.347966 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.347983 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:21Z","lastTransitionTime":"2026-02-19T00:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.451539 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.451608 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.451625 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.451648 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.451666 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:21Z","lastTransitionTime":"2026-02-19T00:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.555063 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.555133 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.555153 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.555181 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.555204 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:21Z","lastTransitionTime":"2026-02-19T00:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.658780 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.658871 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.658886 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.658910 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.658924 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:21Z","lastTransitionTime":"2026-02-19T00:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.761642 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.761708 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.761722 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.761746 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.761761 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:21Z","lastTransitionTime":"2026-02-19T00:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.865064 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.865133 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.865174 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.865211 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.865235 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:21Z","lastTransitionTime":"2026-02-19T00:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.968847 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.968936 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.968956 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.968980 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:21 crc kubenswrapper[4825]: I0219 00:09:21.968997 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:21Z","lastTransitionTime":"2026-02-19T00:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.066000 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.065986 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:48:21.703292307 +0000 UTC Feb 19 00:09:22 crc kubenswrapper[4825]: E0219 00:09:22.066265 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.067763 4825 scope.go:117] "RemoveContainer" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" Feb 19 00:09:22 crc kubenswrapper[4825]: E0219 00:09:22.068164 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bdpln_openshift-ovn-kubernetes(0c24ef0e-b402-4585-a79a-6b98b9896f5a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.072606 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.072669 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.072716 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.072741 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.072760 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:22Z","lastTransitionTime":"2026-02-19T00:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.178765 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.178843 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.178863 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.178898 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.178932 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:22Z","lastTransitionTime":"2026-02-19T00:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.282535 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.283086 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.283274 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.283549 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.283785 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:22Z","lastTransitionTime":"2026-02-19T00:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.388405 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.388455 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.388473 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.388501 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.388554 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:22Z","lastTransitionTime":"2026-02-19T00:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.492413 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.492489 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.492540 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.492582 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.492609 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:22Z","lastTransitionTime":"2026-02-19T00:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.596097 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.596176 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.596201 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.596246 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.596274 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:22Z","lastTransitionTime":"2026-02-19T00:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.699579 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.699654 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.699674 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.699704 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.699725 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:22Z","lastTransitionTime":"2026-02-19T00:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.802865 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.803001 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.803020 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.803054 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.803080 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:22Z","lastTransitionTime":"2026-02-19T00:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.906839 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.906905 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.906923 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.906955 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:09:22 crc kubenswrapper[4825]: I0219 00:09:22.906973 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:22Z","lastTransitionTime":"2026-02-19T00:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.011335 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.011421 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.011447 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.011490 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.011549 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:23Z","lastTransitionTime":"2026-02-19T00:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.065667 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.065715 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.065667 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:09:23 crc kubenswrapper[4825]: E0219 00:09:23.065888 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:09:23 crc kubenswrapper[4825]: E0219 00:09:23.066030 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:09:23 crc kubenswrapper[4825]: E0219 00:09:23.066286 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.066309 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:08:59.093068634 +0000 UTC
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.114954 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.115022 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.115043 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.115072 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.115093 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:23Z","lastTransitionTime":"2026-02-19T00:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.218261 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.218410 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.218463 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.218490 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.218536 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:23Z","lastTransitionTime":"2026-02-19T00:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.322751 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.322822 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.322845 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.322875 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.322894 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:23Z","lastTransitionTime":"2026-02-19T00:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.425913 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.426292 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.426471 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.426623 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.426747 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:23Z","lastTransitionTime":"2026-02-19T00:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.530808 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.531481 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.531600 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.531691 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.531813 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:23Z","lastTransitionTime":"2026-02-19T00:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.635634 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.635700 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.635715 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.635738 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.635753 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:23Z","lastTransitionTime":"2026-02-19T00:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.738761 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.739295 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.739465 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.739728 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.739936 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:23Z","lastTransitionTime":"2026-02-19T00:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.845147 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.845243 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.845268 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.845307 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.845334 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:23Z","lastTransitionTime":"2026-02-19T00:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.925470 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.925576 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.925599 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.925629 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.925649 4825 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:09:23Z","lastTransitionTime":"2026-02-19T00:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.998164 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"]
Feb 19 00:09:23 crc kubenswrapper[4825]: I0219 00:09:23.998676 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.002657 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.003228 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.003495 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.002243 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.006318 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.006550 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.007815 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.007907 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.007949 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.065591 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:09:24 crc kubenswrapper[4825]: E0219 00:09:24.065779 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.105275 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:18:48.833631646 +0000 UTC
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.105339 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.109227 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.109327 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.109375 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.109393 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.109459 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.109546 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.109586 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.111685 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.115677 4825 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.122452 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.139049 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6269d2bf-f08e-4f90-a0eb-48857ceda4b3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hjfrk\" (UID: \"6269d2bf-f08e-4f90-a0eb-48857ceda4b3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.324814 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk"
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.802627 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk" event={"ID":"6269d2bf-f08e-4f90-a0eb-48857ceda4b3","Type":"ContainerStarted","Data":"d216aef2249ff0dd329cd8223c3226bd7653d081195fba64100a38b4c0664d37"}
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.803356 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk" event={"ID":"6269d2bf-f08e-4f90-a0eb-48857ceda4b3","Type":"ContainerStarted","Data":"37b99e310c583a90ced0d27c66d5263d9ebdd6dccc31d6c24787e44446f9dd4e"}
Feb 19 00:09:24 crc kubenswrapper[4825]: I0219 00:09:24.824157 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hjfrk" podStartSLOduration=88.824130742 podStartE2EDuration="1m28.824130742s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:24.823490175 +0000 UTC m=+110.514456282" watchObservedRunningTime="2026-02-19 00:09:24.824130742 +0000 UTC m=+110.515096829"
Feb 19 00:09:25 crc kubenswrapper[4825]: I0219 00:09:25.066060 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:09:25 crc kubenswrapper[4825]: I0219 00:09:25.066115 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:09:25 crc kubenswrapper[4825]: I0219 00:09:25.066060 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:09:25 crc kubenswrapper[4825]: E0219 00:09:25.068387 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:09:25 crc kubenswrapper[4825]: E0219 00:09:25.068547 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:09:25 crc kubenswrapper[4825]: E0219 00:09:25.068698 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:09:26 crc kubenswrapper[4825]: I0219 00:09:26.066018 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:09:26 crc kubenswrapper[4825]: E0219 00:09:26.066246 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff"
Feb 19 00:09:27 crc kubenswrapper[4825]: I0219 00:09:27.065403 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:09:27 crc kubenswrapper[4825]: I0219 00:09:27.065484 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:09:27 crc kubenswrapper[4825]: E0219 00:09:27.065642 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:09:27 crc kubenswrapper[4825]: I0219 00:09:27.065403 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:09:27 crc kubenswrapper[4825]: E0219 00:09:27.066071 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:09:27 crc kubenswrapper[4825]: E0219 00:09:27.065917 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:09:28 crc kubenswrapper[4825]: I0219 00:09:28.065290 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:09:28 crc kubenswrapper[4825]: E0219 00:09:28.065585 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff"
Feb 19 00:09:29 crc kubenswrapper[4825]: I0219 00:09:29.065887 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:09:29 crc kubenswrapper[4825]: E0219 00:09:29.066103 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:09:29 crc kubenswrapper[4825]: I0219 00:09:29.066482 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:09:29 crc kubenswrapper[4825]: E0219 00:09:29.066618 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:09:29 crc kubenswrapper[4825]: I0219 00:09:29.066763 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:09:29 crc kubenswrapper[4825]: E0219 00:09:29.066972 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:09:30 crc kubenswrapper[4825]: I0219 00:09:30.065917 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:09:30 crc kubenswrapper[4825]: E0219 00:09:30.066168 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff"
Feb 19 00:09:30 crc kubenswrapper[4825]: I0219 00:09:30.828675 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfx7x_2daa6777-c1b1-4fae-9c14-cfe10867288a/kube-multus/1.log"
Feb 19 00:09:30 crc kubenswrapper[4825]: I0219 00:09:30.829487 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfx7x_2daa6777-c1b1-4fae-9c14-cfe10867288a/kube-multus/0.log"
Feb 19 00:09:30 crc kubenswrapper[4825]: I0219 00:09:30.829637 4825 generic.go:334] "Generic (PLEG): container finished" podID="2daa6777-c1b1-4fae-9c14-cfe10867288a" containerID="1e9505df7615b5027220ed25ee309b0a066503c64d3f8f45ef3fc23de1af2ac4" exitCode=1
Feb 19 00:09:30 crc kubenswrapper[4825]: I0219 00:09:30.829697 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfx7x" event={"ID":"2daa6777-c1b1-4fae-9c14-cfe10867288a","Type":"ContainerDied","Data":"1e9505df7615b5027220ed25ee309b0a066503c64d3f8f45ef3fc23de1af2ac4"}
Feb 19 00:09:30 crc kubenswrapper[4825]: I0219 00:09:30.829769 4825 scope.go:117] "RemoveContainer" containerID="5f50118566e9762654dc7103cb4404524d021b248f48f4e16e5aeda9a943389a"
Feb 19 00:09:30 crc kubenswrapper[4825]: I0219 00:09:30.830576 4825 scope.go:117] "RemoveContainer" containerID="1e9505df7615b5027220ed25ee309b0a066503c64d3f8f45ef3fc23de1af2ac4"
Feb 19 00:09:30 crc kubenswrapper[4825]: E0219 00:09:30.830929 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-zfx7x_openshift-multus(2daa6777-c1b1-4fae-9c14-cfe10867288a)\"" pod="openshift-multus/multus-zfx7x" podUID="2daa6777-c1b1-4fae-9c14-cfe10867288a"
Feb 19 00:09:31 crc kubenswrapper[4825]: I0219 00:09:31.065762 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:09:31 crc kubenswrapper[4825]: I0219 00:09:31.065875 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:09:31 crc kubenswrapper[4825]: I0219 00:09:31.065996 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:09:31 crc kubenswrapper[4825]: E0219 00:09:31.066211 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:09:31 crc kubenswrapper[4825]: E0219 00:09:31.066414 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:09:31 crc kubenswrapper[4825]: E0219 00:09:31.066631 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:09:31 crc kubenswrapper[4825]: I0219 00:09:31.835908 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfx7x_2daa6777-c1b1-4fae-9c14-cfe10867288a/kube-multus/1.log"
Feb 19 00:09:32 crc kubenswrapper[4825]: I0219 00:09:32.065968 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:09:32 crc kubenswrapper[4825]: E0219 00:09:32.066185 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff"
Feb 19 00:09:33 crc kubenswrapper[4825]: I0219 00:09:33.065799 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:09:33 crc kubenswrapper[4825]: I0219 00:09:33.065982 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:09:33 crc kubenswrapper[4825]: I0219 00:09:33.066410 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:09:33 crc kubenswrapper[4825]: E0219 00:09:33.066391 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:09:33 crc kubenswrapper[4825]: E0219 00:09:33.066669 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:09:33 crc kubenswrapper[4825]: E0219 00:09:33.066911 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:33 crc kubenswrapper[4825]: I0219 00:09:33.068344 4825 scope.go:117] "RemoveContainer" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" Feb 19 00:09:33 crc kubenswrapper[4825]: I0219 00:09:33.846479 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/3.log" Feb 19 00:09:33 crc kubenswrapper[4825]: I0219 00:09:33.850856 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerStarted","Data":"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11"} Feb 19 00:09:33 crc kubenswrapper[4825]: I0219 00:09:33.851309 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:09:33 crc kubenswrapper[4825]: I0219 00:09:33.881125 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podStartSLOduration=97.881100967 podStartE2EDuration="1m37.881100967s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:33.879814492 +0000 UTC m=+119.570780549" watchObservedRunningTime="2026-02-19 00:09:33.881100967 +0000 UTC m=+119.572067014" Feb 19 00:09:34 crc kubenswrapper[4825]: I0219 00:09:34.065888 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:34 crc kubenswrapper[4825]: E0219 00:09:34.066227 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:34 crc kubenswrapper[4825]: I0219 00:09:34.219983 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bhnmw"] Feb 19 00:09:34 crc kubenswrapper[4825]: I0219 00:09:34.855015 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:34 crc kubenswrapper[4825]: E0219 00:09:34.855243 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:35 crc kubenswrapper[4825]: I0219 00:09:35.065717 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:35 crc kubenswrapper[4825]: I0219 00:09:35.066932 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:35 crc kubenswrapper[4825]: E0219 00:09:35.066929 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:35 crc kubenswrapper[4825]: I0219 00:09:35.066971 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:35 crc kubenswrapper[4825]: E0219 00:09:35.067054 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:35 crc kubenswrapper[4825]: E0219 00:09:35.067233 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:35 crc kubenswrapper[4825]: E0219 00:09:35.085093 4825 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 00:09:35 crc kubenswrapper[4825]: E0219 00:09:35.166752 4825 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 00:09:36 crc kubenswrapper[4825]: I0219 00:09:36.065702 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:36 crc kubenswrapper[4825]: E0219 00:09:36.065886 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:37 crc kubenswrapper[4825]: I0219 00:09:37.064939 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:37 crc kubenswrapper[4825]: I0219 00:09:37.064968 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:37 crc kubenswrapper[4825]: E0219 00:09:37.065180 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:37 crc kubenswrapper[4825]: I0219 00:09:37.065087 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:37 crc kubenswrapper[4825]: E0219 00:09:37.065288 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:37 crc kubenswrapper[4825]: E0219 00:09:37.065332 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:38 crc kubenswrapper[4825]: I0219 00:09:38.066000 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:38 crc kubenswrapper[4825]: E0219 00:09:38.066276 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:39 crc kubenswrapper[4825]: I0219 00:09:39.065595 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:39 crc kubenswrapper[4825]: I0219 00:09:39.065706 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:39 crc kubenswrapper[4825]: I0219 00:09:39.065861 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:39 crc kubenswrapper[4825]: E0219 00:09:39.066641 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:39 crc kubenswrapper[4825]: E0219 00:09:39.066794 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:39 crc kubenswrapper[4825]: E0219 00:09:39.067226 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:40 crc kubenswrapper[4825]: I0219 00:09:40.065438 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:40 crc kubenswrapper[4825]: E0219 00:09:40.065731 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:40 crc kubenswrapper[4825]: E0219 00:09:40.169564 4825 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 00:09:41 crc kubenswrapper[4825]: I0219 00:09:41.065222 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:41 crc kubenswrapper[4825]: I0219 00:09:41.065281 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:41 crc kubenswrapper[4825]: E0219 00:09:41.065378 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:41 crc kubenswrapper[4825]: I0219 00:09:41.065495 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:41 crc kubenswrapper[4825]: E0219 00:09:41.065555 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:41 crc kubenswrapper[4825]: E0219 00:09:41.065749 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:42 crc kubenswrapper[4825]: I0219 00:09:42.065683 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:42 crc kubenswrapper[4825]: E0219 00:09:42.065808 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:43 crc kubenswrapper[4825]: I0219 00:09:43.065392 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:43 crc kubenswrapper[4825]: I0219 00:09:43.065441 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:43 crc kubenswrapper[4825]: I0219 00:09:43.065913 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:43 crc kubenswrapper[4825]: E0219 00:09:43.066044 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:43 crc kubenswrapper[4825]: E0219 00:09:43.066100 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:43 crc kubenswrapper[4825]: E0219 00:09:43.066655 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:44 crc kubenswrapper[4825]: I0219 00:09:44.065042 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:44 crc kubenswrapper[4825]: I0219 00:09:44.065427 4825 scope.go:117] "RemoveContainer" containerID="1e9505df7615b5027220ed25ee309b0a066503c64d3f8f45ef3fc23de1af2ac4" Feb 19 00:09:44 crc kubenswrapper[4825]: E0219 00:09:44.066379 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:44 crc kubenswrapper[4825]: I0219 00:09:44.894091 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfx7x_2daa6777-c1b1-4fae-9c14-cfe10867288a/kube-multus/1.log" Feb 19 00:09:44 crc kubenswrapper[4825]: I0219 00:09:44.894492 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfx7x" event={"ID":"2daa6777-c1b1-4fae-9c14-cfe10867288a","Type":"ContainerStarted","Data":"e59b570df152a7fa6610b67dc946a6c9ad47eb9cb82e546c6406b9a5982d6f99"} Feb 19 00:09:45 crc kubenswrapper[4825]: I0219 00:09:45.069756 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:45 crc kubenswrapper[4825]: E0219 00:09:45.069893 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:45 crc kubenswrapper[4825]: I0219 00:09:45.069959 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:45 crc kubenswrapper[4825]: E0219 00:09:45.070015 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:45 crc kubenswrapper[4825]: I0219 00:09:45.070055 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:45 crc kubenswrapper[4825]: E0219 00:09:45.070109 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:45 crc kubenswrapper[4825]: E0219 00:09:45.170412 4825 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 00:09:46 crc kubenswrapper[4825]: I0219 00:09:46.065777 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:46 crc kubenswrapper[4825]: E0219 00:09:46.065994 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:47 crc kubenswrapper[4825]: I0219 00:09:47.065771 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:47 crc kubenswrapper[4825]: I0219 00:09:47.065864 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:47 crc kubenswrapper[4825]: E0219 00:09:47.065998 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:47 crc kubenswrapper[4825]: E0219 00:09:47.066464 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:47 crc kubenswrapper[4825]: I0219 00:09:47.066916 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:47 crc kubenswrapper[4825]: E0219 00:09:47.067148 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:48 crc kubenswrapper[4825]: I0219 00:09:48.065171 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:48 crc kubenswrapper[4825]: E0219 00:09:48.065376 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:48 crc kubenswrapper[4825]: I0219 00:09:48.851930 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:09:49 crc kubenswrapper[4825]: I0219 00:09:49.065433 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:49 crc kubenswrapper[4825]: I0219 00:09:49.065544 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:49 crc kubenswrapper[4825]: E0219 00:09:49.065593 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:09:49 crc kubenswrapper[4825]: I0219 00:09:49.065732 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:49 crc kubenswrapper[4825]: E0219 00:09:49.065825 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:09:49 crc kubenswrapper[4825]: E0219 00:09:49.066012 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:09:50 crc kubenswrapper[4825]: I0219 00:09:50.066011 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:50 crc kubenswrapper[4825]: E0219 00:09:50.066247 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bhnmw" podUID="80aa664d-e111-41f6-815d-f4185e1f72ff" Feb 19 00:09:51 crc kubenswrapper[4825]: I0219 00:09:51.065567 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:09:51 crc kubenswrapper[4825]: I0219 00:09:51.065569 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:09:51 crc kubenswrapper[4825]: I0219 00:09:51.065567 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:51 crc kubenswrapper[4825]: I0219 00:09:51.068108 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 00:09:51 crc kubenswrapper[4825]: I0219 00:09:51.068527 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 00:09:51 crc kubenswrapper[4825]: I0219 00:09:51.068925 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 00:09:51 crc kubenswrapper[4825]: I0219 00:09:51.070949 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 00:09:52 crc kubenswrapper[4825]: I0219 00:09:52.065792 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw" Feb 19 00:09:52 crc kubenswrapper[4825]: I0219 00:09:52.068637 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 00:09:52 crc kubenswrapper[4825]: I0219 00:09:52.069310 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.542324 4825 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.594239 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bwrjg"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.594769 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.595813 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.596408 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.602013 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.602538 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.602733 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29524320-khn5f"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.602968 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-khn5f" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.622422 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.622766 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.623586 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.624079 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.624724 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.625935 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bf56f"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.626460 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-bf56f" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.627533 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.627576 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.627624 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.627658 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.627695 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.627749 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.632397 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.632730 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.633541 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.635834 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.641882 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652452 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf1c5562-2a73-410f-b7b3-fe0edab3216b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pfhx8\" (UID: \"bf1c5562-2a73-410f-b7b3-fe0edab3216b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652593 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jplj\" (UniqueName: \"kubernetes.io/projected/f110da82-46c4-44d5-91f6-195be763d96f-kube-api-access-7jplj\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652638 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsrns\" (UniqueName: \"kubernetes.io/projected/7e0cdf1c-faf9-4a21-8beb-1b712bd266fc-kube-api-access-xsrns\") pod \"image-pruner-29524320-khn5f\" (UID: \"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc\") " pod="openshift-image-registry/image-pruner-29524320-khn5f" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652697 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/864d9174-e828-4f4e-a143-bc3491f42aef-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-88d6d\" (UID: \"864d9174-e828-4f4e-a143-bc3491f42aef\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652749 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7e0cdf1c-faf9-4a21-8beb-1b712bd266fc-serviceca\") pod \"image-pruner-29524320-khn5f\" (UID: \"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc\") " pod="openshift-image-registry/image-pruner-29524320-khn5f" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652780 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f110da82-46c4-44d5-91f6-195be763d96f-serving-cert\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652804 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400b2fc6-03e6-48b5-9424-67ac6c34cfb1-config\") pod \"machine-approver-56656f9798-r7p5c\" (UID: \"400b2fc6-03e6-48b5-9424-67ac6c34cfb1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652828 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-client-ca\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652853 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlxwm\" (UniqueName: \"kubernetes.io/projected/400b2fc6-03e6-48b5-9424-67ac6c34cfb1-kube-api-access-zlxwm\") pod \"machine-approver-56656f9798-r7p5c\" (UID: \"400b2fc6-03e6-48b5-9424-67ac6c34cfb1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652873 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/400b2fc6-03e6-48b5-9424-67ac6c34cfb1-auth-proxy-config\") pod \"machine-approver-56656f9798-r7p5c\" (UID: \"400b2fc6-03e6-48b5-9424-67ac6c34cfb1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652881 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652895 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f47vl\" (UniqueName: \"kubernetes.io/projected/bf1c5562-2a73-410f-b7b3-fe0edab3216b-kube-api-access-f47vl\") pod \"cluster-image-registry-operator-dc59b4c8b-pfhx8\" (UID: \"bf1c5562-2a73-410f-b7b3-fe0edab3216b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652900 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652938 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfmzb\" 
(UniqueName: \"kubernetes.io/projected/29be03fe-da22-41a7-9243-67aa815fbfb1-kube-api-access-pfmzb\") pod \"downloads-7954f5f757-bf56f\" (UID: \"29be03fe-da22-41a7-9243-67aa815fbfb1\") " pod="openshift-console/downloads-7954f5f757-bf56f" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652970 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf1c5562-2a73-410f-b7b3-fe0edab3216b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pfhx8\" (UID: \"bf1c5562-2a73-410f-b7b3-fe0edab3216b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.652998 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/400b2fc6-03e6-48b5-9424-67ac6c34cfb1-machine-approver-tls\") pod \"machine-approver-56656f9798-r7p5c\" (UID: \"400b2fc6-03e6-48b5-9424-67ac6c34cfb1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.653038 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsz7c\" (UniqueName: \"kubernetes.io/projected/864d9174-e828-4f4e-a143-bc3491f42aef-kube-api-access-qsz7c\") pod \"cluster-samples-operator-665b6dd947-88d6d\" (UID: \"864d9174-e828-4f4e-a143-bc3491f42aef\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.653065 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf1c5562-2a73-410f-b7b3-fe0edab3216b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pfhx8\" (UID: 
\"bf1c5562-2a73-410f-b7b3-fe0edab3216b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.653092 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-config\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.653129 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.654021 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.654770 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.655226 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.655356 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w6fd4"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.655798 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.655910 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.661183 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.663185 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.665009 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-92549"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.665986 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.668742 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-czdg9"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.670185 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.670792 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.674667 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rm422"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.676067 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.679257 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.679805 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.682918 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9p52n"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.740535 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-k9t86"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.740903 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.741279 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.741301 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.742177 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.744094 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.744142 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.744230 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.744252 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.744293 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.744373 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.744408 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.744466 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.744535 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 00:09:54 
crc kubenswrapper[4825]: I0219 00:09:54.744623 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.744744 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.744986 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.747118 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.747529 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.747558 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.747689 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.748078 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.748636 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.748792 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.748802 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.749192 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.749230 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.749318 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.749409 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.749455 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.749645 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.749903 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.750442 
4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.751252 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.751369 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.751400 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.751519 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.751559 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.751696 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.751834 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.751874 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.752494 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.752650 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.752780 4825 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.752916 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.753003 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.753577 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.753582 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-audit-policies\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.753698 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc9rk\" (UniqueName: \"kubernetes.io/projected/3da90cf7-acf9-4fa5-8a59-1f444dd5a619-kube-api-access-kc9rk\") pod \"openshift-apiserver-operator-796bbdcf4f-9lm2r\" (UID: \"3da90cf7-acf9-4fa5-8a59-1f444dd5a619\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.753731 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7d82a4a-3947-4645-982a-654a8101ba55-node-pullsecrets\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc 
kubenswrapper[4825]: I0219 00:09:54.753771 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-image-import-ca\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.753807 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-client-ca\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.753840 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.753879 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ztk7\" (UniqueName: \"kubernetes.io/projected/fff3e916-b71f-44c3-a4ac-a78efc547a28-kube-api-access-5ztk7\") pod \"route-controller-manager-6576b87f9c-96ctn\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.753922 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlxwm\" (UniqueName: 
\"kubernetes.io/projected/400b2fc6-03e6-48b5-9424-67ac6c34cfb1-kube-api-access-zlxwm\") pod \"machine-approver-56656f9798-r7p5c\" (UID: \"400b2fc6-03e6-48b5-9424-67ac6c34cfb1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.753954 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.753989 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/400b2fc6-03e6-48b5-9424-67ac6c34cfb1-auth-proxy-config\") pod \"machine-approver-56656f9798-r7p5c\" (UID: \"400b2fc6-03e6-48b5-9424-67ac6c34cfb1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754024 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f47vl\" (UniqueName: \"kubernetes.io/projected/bf1c5562-2a73-410f-b7b3-fe0edab3216b-kube-api-access-f47vl\") pod \"cluster-image-registry-operator-dc59b4c8b-pfhx8\" (UID: \"bf1c5562-2a73-410f-b7b3-fe0edab3216b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754058 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vnph\" (UniqueName: \"kubernetes.io/projected/690f4dfb-2e2f-4419-a331-3a26e0dac535-kube-api-access-4vnph\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754091 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff3e916-b71f-44c3-a4ac-a78efc547a28-config\") pod \"route-controller-manager-6576b87f9c-96ctn\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754132 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfmzb\" (UniqueName: \"kubernetes.io/projected/29be03fe-da22-41a7-9243-67aa815fbfb1-kube-api-access-pfmzb\") pod \"downloads-7954f5f757-bf56f\" (UID: \"29be03fe-da22-41a7-9243-67aa815fbfb1\") " pod="openshift-console/downloads-7954f5f757-bf56f" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754167 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-service-ca\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754199 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-oauth-serving-cert\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754236 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf1c5562-2a73-410f-b7b3-fe0edab3216b-bound-sa-token\") 
pod \"cluster-image-registry-operator-dc59b4c8b-pfhx8\" (UID: \"bf1c5562-2a73-410f-b7b3-fe0edab3216b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754270 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7d82a4a-3947-4645-982a-654a8101ba55-audit-dir\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754308 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-console-serving-cert\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754344 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jwwz\" (UniqueName: \"kubernetes.io/projected/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-kube-api-access-8jwwz\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754373 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86321d30-1e05-467f-bdac-35cefcfdd789-config\") pod \"console-operator-58897d9998-czdg9\" (UID: \"86321d30-1e05-467f-bdac-35cefcfdd789\") " pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754405 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/999a793b-4d2e-41bb-bd09-cd8ca31cef0c-config\") pod \"machine-api-operator-5694c8668f-9p52n\" (UID: \"999a793b-4d2e-41bb-bd09-cd8ca31cef0c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754438 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754471 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.754502 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.756399 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-client-ca\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: 
\"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.757094 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/400b2fc6-03e6-48b5-9424-67ac6c34cfb1-auth-proxy-config\") pod \"machine-approver-56656f9798-r7p5c\" (UID: \"400b2fc6-03e6-48b5-9424-67ac6c34cfb1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.757459 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.758011 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.758079 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.758164 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.758177 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.758249 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.758263 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.758332 4825 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.758415 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.758494 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.758601 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.758679 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.759000 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.759633 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/400b2fc6-03e6-48b5-9424-67ac6c34cfb1-machine-approver-tls\") pod \"machine-approver-56656f9798-r7p5c\" (UID: \"400b2fc6-03e6-48b5-9424-67ac6c34cfb1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.759696 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/690f4dfb-2e2f-4419-a331-3a26e0dac535-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.759729 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/690f4dfb-2e2f-4419-a331-3a26e0dac535-service-ca-bundle\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.759764 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.759829 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsz7c\" (UniqueName: \"kubernetes.io/projected/864d9174-e828-4f4e-a143-bc3491f42aef-kube-api-access-qsz7c\") pod \"cluster-samples-operator-665b6dd947-88d6d\" (UID: \"864d9174-e828-4f4e-a143-bc3491f42aef\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.759967 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690f4dfb-2e2f-4419-a331-3a26e0dac535-config\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760058 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf1c5562-2a73-410f-b7b3-fe0edab3216b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pfhx8\" (UID: \"bf1c5562-2a73-410f-b7b3-fe0edab3216b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760108 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760138 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760188 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-config\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760215 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b7d82a4a-3947-4645-982a-654a8101ba55-encryption-config\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " 
pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760240 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760286 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-console-oauth-config\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760325 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf1c5562-2a73-410f-b7b3-fe0edab3216b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pfhx8\" (UID: \"bf1c5562-2a73-410f-b7b3-fe0edab3216b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760374 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jplj\" (UniqueName: \"kubernetes.io/projected/f110da82-46c4-44d5-91f6-195be763d96f-kube-api-access-7jplj\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760399 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b7d82a4a-3947-4645-982a-654a8101ba55-serving-cert\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760448 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5dcc\" (UniqueName: \"kubernetes.io/projected/86321d30-1e05-467f-bdac-35cefcfdd789-kube-api-access-v5dcc\") pod \"console-operator-58897d9998-czdg9\" (UID: \"86321d30-1e05-467f-bdac-35cefcfdd789\") " pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760477 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fff3e916-b71f-44c3-a4ac-a78efc547a28-serving-cert\") pod \"route-controller-manager-6576b87f9c-96ctn\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760540 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsrns\" (UniqueName: \"kubernetes.io/projected/7e0cdf1c-faf9-4a21-8beb-1b712bd266fc-kube-api-access-xsrns\") pod \"image-pruner-29524320-khn5f\" (UID: \"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc\") " pod="openshift-image-registry/image-pruner-29524320-khn5f" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760568 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da90cf7-acf9-4fa5-8a59-1f444dd5a619-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9lm2r\" (UID: \"3da90cf7-acf9-4fa5-8a59-1f444dd5a619\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" Feb 19 00:09:54 crc 
kubenswrapper[4825]: I0219 00:09:54.760617 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-console-config\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760687 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/690f4dfb-2e2f-4419-a331-3a26e0dac535-serving-cert\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760724 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhsjj\" (UniqueName: \"kubernetes.io/projected/b7d82a4a-3947-4645-982a-654a8101ba55-kube-api-access-dhsjj\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760774 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760816 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fff3e916-b71f-44c3-a4ac-a78efc547a28-client-ca\") pod 
\"route-controller-manager-6576b87f9c-96ctn\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760879 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/864d9174-e828-4f4e-a143-bc3491f42aef-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-88d6d\" (UID: \"864d9174-e828-4f4e-a143-bc3491f42aef\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.760905 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86321d30-1e05-467f-bdac-35cefcfdd789-serving-cert\") pod \"console-operator-58897d9998-czdg9\" (UID: \"86321d30-1e05-467f-bdac-35cefcfdd789\") " pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.761137 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-trusted-ca-bundle\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.761172 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-audit-dir\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.761223 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.761290 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-audit\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.761323 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.763431 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-config\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.759651 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.761195 4825 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.761235 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.761929 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.765128 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.762396 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.768491 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.769091 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.771351 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.772686 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/999a793b-4d2e-41bb-bd09-cd8ca31cef0c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9p52n\" (UID: \"999a793b-4d2e-41bb-bd09-cd8ca31cef0c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.772761 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr2x4\" (UniqueName: \"kubernetes.io/projected/999a793b-4d2e-41bb-bd09-cd8ca31cef0c-kube-api-access-rr2x4\") pod \"machine-api-operator-5694c8668f-9p52n\" (UID: \"999a793b-4d2e-41bb-bd09-cd8ca31cef0c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.772842 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b7d82a4a-3947-4645-982a-654a8101ba55-etcd-client\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.772880 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7e0cdf1c-faf9-4a21-8beb-1b712bd266fc-serviceca\") pod \"image-pruner-29524320-khn5f\" (UID: \"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc\") " pod="openshift-image-registry/image-pruner-29524320-khn5f" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.769454 4825 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.772928 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/999a793b-4d2e-41bb-bd09-cd8ca31cef0c-images\") pod \"machine-api-operator-5694c8668f-9p52n\" (UID: \"999a793b-4d2e-41bb-bd09-cd8ca31cef0c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.772957 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-etcd-serving-ca\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.773001 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-config\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.773027 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86321d30-1e05-467f-bdac-35cefcfdd789-trusted-ca\") pod \"console-operator-58897d9998-czdg9\" (UID: \"86321d30-1e05-467f-bdac-35cefcfdd789\") " pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.773100 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f110da82-46c4-44d5-91f6-195be763d96f-serving-cert\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.773155 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da90cf7-acf9-4fa5-8a59-1f444dd5a619-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9lm2r\" (UID: \"3da90cf7-acf9-4fa5-8a59-1f444dd5a619\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.773187 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400b2fc6-03e6-48b5-9424-67ac6c34cfb1-config\") pod \"machine-approver-56656f9798-r7p5c\" (UID: \"400b2fc6-03e6-48b5-9424-67ac6c34cfb1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.773230 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtmnj\" (UniqueName: \"kubernetes.io/projected/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-kube-api-access-rtmnj\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.773259 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc 
kubenswrapper[4825]: I0219 00:09:54.769808 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.774240 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.774705 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7e0cdf1c-faf9-4a21-8beb-1b712bd266fc-serviceca\") pod \"image-pruner-29524320-khn5f\" (UID: \"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc\") " pod="openshift-image-registry/image-pruner-29524320-khn5f" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.777365 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/400b2fc6-03e6-48b5-9424-67ac6c34cfb1-config\") pod \"machine-approver-56656f9798-r7p5c\" (UID: \"400b2fc6-03e6-48b5-9424-67ac6c34cfb1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.805220 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/400b2fc6-03e6-48b5-9424-67ac6c34cfb1-machine-approver-tls\") pod \"machine-approver-56656f9798-r7p5c\" (UID: \"400b2fc6-03e6-48b5-9424-67ac6c34cfb1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.805931 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf1c5562-2a73-410f-b7b3-fe0edab3216b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-pfhx8\" (UID: \"bf1c5562-2a73-410f-b7b3-fe0edab3216b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 
19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.806259 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.807812 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sms5d"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.809849 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f110da82-46c4-44d5-91f6-195be763d96f-serving-cert\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.810129 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-lhzt4"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.811160 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-lhzt4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.805389 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/864d9174-e828-4f4e-a143-bc3491f42aef-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-88d6d\" (UID: \"864d9174-e828-4f4e-a143-bc3491f42aef\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.814014 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf1c5562-2a73-410f-b7b3-fe0edab3216b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-pfhx8\" (UID: \"bf1c5562-2a73-410f-b7b3-fe0edab3216b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.815388 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.816853 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jplj\" (UniqueName: \"kubernetes.io/projected/f110da82-46c4-44d5-91f6-195be763d96f-kube-api-access-7jplj\") pod \"controller-manager-879f6c89f-bwrjg\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.823354 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.823550 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 00:09:54 crc 
kubenswrapper[4825]: I0219 00:09:54.823354 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.823716 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dmlhp"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.823650 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.826211 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.832282 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.832810 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.832847 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.843931 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.847770 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.848011 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.848319 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsrns\" (UniqueName: 
\"kubernetes.io/projected/7e0cdf1c-faf9-4a21-8beb-1b712bd266fc-kube-api-access-xsrns\") pod \"image-pruner-29524320-khn5f\" (UID: \"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc\") " pod="openshift-image-registry/image-pruner-29524320-khn5f" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.849024 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.849315 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.849603 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.850032 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.850266 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dmlhp" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.850411 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.850550 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.852016 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsz7c\" (UniqueName: \"kubernetes.io/projected/864d9174-e828-4f4e-a143-bc3491f42aef-kube-api-access-qsz7c\") pod \"cluster-samples-operator-665b6dd947-88d6d\" (UID: \"864d9174-e828-4f4e-a143-bc3491f42aef\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.852631 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf1c5562-2a73-410f-b7b3-fe0edab3216b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-pfhx8\" (UID: \"bf1c5562-2a73-410f-b7b3-fe0edab3216b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.855414 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.856041 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfmzb\" (UniqueName: \"kubernetes.io/projected/29be03fe-da22-41a7-9243-67aa815fbfb1-kube-api-access-pfmzb\") pod \"downloads-7954f5f757-bf56f\" (UID: \"29be03fe-da22-41a7-9243-67aa815fbfb1\") " pod="openshift-console/downloads-7954f5f757-bf56f" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.856333 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.856481 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 00:09:54 crc kubenswrapper[4825]: 
I0219 00:09:54.856892 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.859024 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlxwm\" (UniqueName: \"kubernetes.io/projected/400b2fc6-03e6-48b5-9424-67ac6c34cfb1-kube-api-access-zlxwm\") pod \"machine-approver-56656f9798-r7p5c\" (UID: \"400b2fc6-03e6-48b5-9424-67ac6c34cfb1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.866026 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f47vl\" (UniqueName: \"kubernetes.io/projected/bf1c5562-2a73-410f-b7b3-fe0edab3216b-kube-api-access-f47vl\") pod \"cluster-image-registry-operator-dc59b4c8b-pfhx8\" (UID: \"bf1c5562-2a73-410f-b7b3-fe0edab3216b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.866823 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.868176 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875069 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875132 4825 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/690f4dfb-2e2f-4419-a331-3a26e0dac535-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875161 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875184 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875214 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6e2361-7fb0-4e89-8747-ae1a46cb0e65-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-r9ggh\" (UID: \"3c6e2361-7fb0-4e89-8747-ae1a46cb0e65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875238 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bda9acd8-7428-4ed2-aa1c-54c759b39e97-stats-auth\") pod \"router-default-5444994796-lhzt4\" (UID: 
\"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875266 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c6e2361-7fb0-4e89-8747-ae1a46cb0e65-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-r9ggh\" (UID: \"3c6e2361-7fb0-4e89-8747-ae1a46cb0e65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875294 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/690f4dfb-2e2f-4419-a331-3a26e0dac535-service-ca-bundle\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875318 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875347 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bda9acd8-7428-4ed2-aa1c-54c759b39e97-default-certificate\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875377 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c142cb69-debd-49df-9c48-50cf5e5aa740-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-76cwz\" (UID: \"c142cb69-debd-49df-9c48-50cf5e5aa740\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875413 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12aeb4bf-6d3b-4c0e-a121-20c286762e7b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ctmf9\" (UID: \"12aeb4bf-6d3b-4c0e-a121-20c286762e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875443 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690f4dfb-2e2f-4419-a331-3a26e0dac535-config\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875477 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g47g\" (UniqueName: \"kubernetes.io/projected/bda9acd8-7428-4ed2-aa1c-54c759b39e97-kube-api-access-7g47g\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.875520 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876494 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876580 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac81eab9-9f57-48f0-a71d-0bc6087eb8d8-metrics-tls\") pod \"dns-operator-744455d44c-dmlhp\" (UID: \"ac81eab9-9f57-48f0-a71d-0bc6087eb8d8\") " pod="openshift-dns-operator/dns-operator-744455d44c-dmlhp" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876657 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b7d82a4a-3947-4645-982a-654a8101ba55-encryption-config\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876691 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-console-oauth-config\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876722 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12aeb4bf-6d3b-4c0e-a121-20c286762e7b-config\") pod \"kube-controller-manager-operator-78b949d7b-ctmf9\" (UID: \"12aeb4bf-6d3b-4c0e-a121-20c286762e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876758 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87lrx\" (UniqueName: \"kubernetes.io/projected/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-kube-api-access-87lrx\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876790 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fff3e916-b71f-44c3-a4ac-a78efc547a28-serving-cert\") pod \"route-controller-manager-6576b87f9c-96ctn\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876824 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da90cf7-acf9-4fa5-8a59-1f444dd5a619-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9lm2r\" (UID: \"3da90cf7-acf9-4fa5-8a59-1f444dd5a619\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876853 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7d82a4a-3947-4645-982a-654a8101ba55-serving-cert\") pod \"apiserver-76f77b778f-rm422\" (UID: 
\"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876881 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5dcc\" (UniqueName: \"kubernetes.io/projected/86321d30-1e05-467f-bdac-35cefcfdd789-kube-api-access-v5dcc\") pod \"console-operator-58897d9998-czdg9\" (UID: \"86321d30-1e05-467f-bdac-35cefcfdd789\") " pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876914 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-console-config\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876945 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-audit-dir\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.876970 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c142cb69-debd-49df-9c48-50cf5e5aa740-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-76cwz\" (UID: \"c142cb69-debd-49df-9c48-50cf5e5aa740\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877018 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/690f4dfb-2e2f-4419-a331-3a26e0dac535-serving-cert\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877047 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12aeb4bf-6d3b-4c0e-a121-20c286762e7b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ctmf9\" (UID: \"12aeb4bf-6d3b-4c0e-a121-20c286762e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877076 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fff3e916-b71f-44c3-a4ac-a78efc547a28-client-ca\") pod \"route-controller-manager-6576b87f9c-96ctn\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877110 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhsjj\" (UniqueName: \"kubernetes.io/projected/b7d82a4a-3947-4645-982a-654a8101ba55-kube-api-access-dhsjj\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877141 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877170 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-encryption-config\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877201 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86321d30-1e05-467f-bdac-35cefcfdd789-serving-cert\") pod \"console-operator-58897d9998-czdg9\" (UID: \"86321d30-1e05-467f-bdac-35cefcfdd789\") " pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877230 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c142cb69-debd-49df-9c48-50cf5e5aa740-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-76cwz\" (UID: \"c142cb69-debd-49df-9c48-50cf5e5aa740\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877258 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-trusted-ca-bundle\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877286 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-audit-dir\") pod 
\"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877310 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877335 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f5f23c6e-7172-4c6c-87f0-8fa5edfa8248-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sms5d\" (UID: \"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877373 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-audit\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877401 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877431 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg52t\" (UniqueName: \"kubernetes.io/projected/3c6e2361-7fb0-4e89-8747-ae1a46cb0e65-kube-api-access-gg52t\") pod \"openshift-controller-manager-operator-756b6f6bc6-r9ggh\" (UID: \"3c6e2361-7fb0-4e89-8747-ae1a46cb0e65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.877465 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr2x4\" (UniqueName: \"kubernetes.io/projected/999a793b-4d2e-41bb-bd09-cd8ca31cef0c-kube-api-access-rr2x4\") pod \"machine-api-operator-5694c8668f-9p52n\" (UID: \"999a793b-4d2e-41bb-bd09-cd8ca31cef0c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.878324 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.879103 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/690f4dfb-2e2f-4419-a331-3a26e0dac535-service-ca-bundle\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.881615 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690f4dfb-2e2f-4419-a331-3a26e0dac535-config\") pod 
\"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.882528 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.883522 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.883532 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-console-config\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884037 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884345 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/999a793b-4d2e-41bb-bd09-cd8ca31cef0c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9p52n\" (UID: \"999a793b-4d2e-41bb-bd09-cd8ca31cef0c\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884389 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b7d82a4a-3947-4645-982a-654a8101ba55-etcd-client\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884424 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/999a793b-4d2e-41bb-bd09-cd8ca31cef0c-images\") pod \"machine-api-operator-5694c8668f-9p52n\" (UID: \"999a793b-4d2e-41bb-bd09-cd8ca31cef0c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884456 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-etcd-serving-ca\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884494 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-etcd-client\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884544 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w7kq\" (UniqueName: \"kubernetes.io/projected/2ee445c4-ffb8-44f0-8cff-19563c07b525-kube-api-access-6w7kq\") pod \"ingress-operator-5b745b69d9-q5z5p\" 
(UID: \"2ee445c4-ffb8-44f0-8cff-19563c07b525\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884600 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da90cf7-acf9-4fa5-8a59-1f444dd5a619-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9lm2r\" (UID: \"3da90cf7-acf9-4fa5-8a59-1f444dd5a619\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884636 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-config\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884667 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86321d30-1e05-467f-bdac-35cefcfdd789-trusted-ca\") pod \"console-operator-58897d9998-czdg9\" (UID: \"86321d30-1e05-467f-bdac-35cefcfdd789\") " pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884704 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtmnj\" (UniqueName: \"kubernetes.io/projected/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-kube-api-access-rtmnj\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884734 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884768 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884855 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-audit-policies\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884895 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc9rk\" (UniqueName: \"kubernetes.io/projected/3da90cf7-acf9-4fa5-8a59-1f444dd5a619-kube-api-access-kc9rk\") pod \"openshift-apiserver-operator-796bbdcf4f-9lm2r\" (UID: \"3da90cf7-acf9-4fa5-8a59-1f444dd5a619\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884927 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7d82a4a-3947-4645-982a-654a8101ba55-node-pullsecrets\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.884986 
4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-image-import-ca\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885015 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-audit-policies\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885046 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda9acd8-7428-4ed2-aa1c-54c759b39e97-metrics-certs\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885082 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885114 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-serving-cert\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885114 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/690f4dfb-2e2f-4419-a331-3a26e0dac535-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885142 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f23c6e-7172-4c6c-87f0-8fa5edfa8248-serving-cert\") pod \"openshift-config-operator-7777fb866f-sms5d\" (UID: \"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885202 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ztk7\" (UniqueName: \"kubernetes.io/projected/fff3e916-b71f-44c3-a4ac-a78efc547a28-kube-api-access-5ztk7\") pod \"route-controller-manager-6576b87f9c-96ctn\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885232 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ee445c4-ffb8-44f0-8cff-19563c07b525-metrics-tls\") pod \"ingress-operator-5b745b69d9-q5z5p\" (UID: \"2ee445c4-ffb8-44f0-8cff-19563c07b525\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885267 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885293 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ee445c4-ffb8-44f0-8cff-19563c07b525-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q5z5p\" (UID: \"2ee445c4-ffb8-44f0-8cff-19563c07b525\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885325 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885359 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vnph\" (UniqueName: \"kubernetes.io/projected/690f4dfb-2e2f-4419-a331-3a26e0dac535-kube-api-access-4vnph\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885387 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff3e916-b71f-44c3-a4ac-a78efc547a28-config\") pod \"route-controller-manager-6576b87f9c-96ctn\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:54 crc kubenswrapper[4825]: 
I0219 00:09:54.885417 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltqt7\" (UniqueName: \"kubernetes.io/projected/ac81eab9-9f57-48f0-a71d-0bc6087eb8d8-kube-api-access-ltqt7\") pod \"dns-operator-744455d44c-dmlhp\" (UID: \"ac81eab9-9f57-48f0-a71d-0bc6087eb8d8\") " pod="openshift-dns-operator/dns-operator-744455d44c-dmlhp" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885446 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g79l\" (UniqueName: \"kubernetes.io/projected/f5f23c6e-7172-4c6c-87f0-8fa5edfa8248-kube-api-access-9g79l\") pod \"openshift-config-operator-7777fb866f-sms5d\" (UID: \"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885479 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-service-ca\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885536 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7d82a4a-3947-4645-982a-654a8101ba55-audit-dir\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885569 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-console-serving-cert\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " 
pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885601 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-oauth-serving-cert\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885628 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ee445c4-ffb8-44f0-8cff-19563c07b525-trusted-ca\") pod \"ingress-operator-5b745b69d9-q5z5p\" (UID: \"2ee445c4-ffb8-44f0-8cff-19563c07b525\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885646 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-audit-dir\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885659 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jwwz\" (UniqueName: \"kubernetes.io/projected/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-kube-api-access-8jwwz\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885737 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86321d30-1e05-467f-bdac-35cefcfdd789-config\") pod \"console-operator-58897d9998-czdg9\" (UID: 
\"86321d30-1e05-467f-bdac-35cefcfdd789\") " pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885777 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/999a793b-4d2e-41bb-bd09-cd8ca31cef0c-config\") pod \"machine-api-operator-5694c8668f-9p52n\" (UID: \"999a793b-4d2e-41bb-bd09-cd8ca31cef0c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.885911 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bda9acd8-7428-4ed2-aa1c-54c759b39e97-service-ca-bundle\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.886042 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.887401 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rlctl"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.887546 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fff3e916-b71f-44c3-a4ac-a78efc547a28-client-ca\") pod \"route-controller-manager-6576b87f9c-96ctn\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:54 crc 
kubenswrapper[4825]: I0219 00:09:54.887999 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w8x8x"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.888466 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.888477 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b7d82a4a-3947-4645-982a-654a8101ba55-encryption-config\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.888483 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.888615 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.889012 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.889013 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/999a793b-4d2e-41bb-bd09-cd8ca31cef0c-images\") pod \"machine-api-operator-5694c8668f-9p52n\" (UID: \"999a793b-4d2e-41bb-bd09-cd8ca31cef0c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.889804 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zbhrb"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.890085 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7d82a4a-3947-4645-982a-654a8101ba55-node-pullsecrets\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.890615 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86321d30-1e05-467f-bdac-35cefcfdd789-trusted-ca\") pod \"console-operator-58897d9998-czdg9\" (UID: \"86321d30-1e05-467f-bdac-35cefcfdd789\") " pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.891023 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-audit-policies\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.891033 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-trusted-ca-bundle\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.891490 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fff3e916-b71f-44c3-a4ac-a78efc547a28-serving-cert\") pod \"route-controller-manager-6576b87f9c-96ctn\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.891912 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-console-oauth-config\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.891921 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b7d82a4a-3947-4645-982a-654a8101ba55-etcd-client\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.892159 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.892355 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff3e916-b71f-44c3-a4ac-a78efc547a28-config\") pod \"route-controller-manager-6576b87f9c-96ctn\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.892678 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-etcd-serving-ca\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.892796 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.892903 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7d82a4a-3947-4645-982a-654a8101ba55-audit-dir\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.893046 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.893232 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-audit\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.893267 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbhrb" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.894319 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/999a793b-4d2e-41bb-bd09-cd8ca31cef0c-config\") pod \"machine-api-operator-5694c8668f-9p52n\" (UID: \"999a793b-4d2e-41bb-bd09-cd8ca31cef0c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.895083 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-trusted-ca-bundle\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.895452 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.896175 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-oauth-serving-cert\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.896332 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86321d30-1e05-467f-bdac-35cefcfdd789-config\") pod \"console-operator-58897d9998-czdg9\" (UID: 
\"86321d30-1e05-467f-bdac-35cefcfdd789\") " pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.896430 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-config\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.897142 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.897153 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da90cf7-acf9-4fa5-8a59-1f444dd5a619-config\") pod \"openshift-apiserver-operator-796bbdcf4f-9lm2r\" (UID: \"3da90cf7-acf9-4fa5-8a59-1f444dd5a619\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.897586 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.897887 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc 
kubenswrapper[4825]: I0219 00:09:54.898065 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f9db5"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.898164 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/999a793b-4d2e-41bb-bd09-cd8ca31cef0c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9p52n\" (UID: \"999a793b-4d2e-41bb-bd09-cd8ca31cef0c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.899222 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-f9db5" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.899883 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-service-ca\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.900290 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.900478 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86321d30-1e05-467f-bdac-35cefcfdd789-serving-cert\") pod \"console-operator-58897d9998-czdg9\" (UID: \"86321d30-1e05-467f-bdac-35cefcfdd789\") " 
pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.901645 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da90cf7-acf9-4fa5-8a59-1f444dd5a619-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-9lm2r\" (UID: \"3da90cf7-acf9-4fa5-8a59-1f444dd5a619\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.902321 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.902420 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.902745 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7d82a4a-3947-4645-982a-654a8101ba55-serving-cert\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.903088 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-console-serving-cert\") pod 
\"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.903129 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.903752 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.904919 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b7d82a4a-3947-4645-982a-654a8101ba55-image-import-ca\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.904975 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn"] Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.905054 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.905482 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.906550 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29524320-khn5f"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.908412 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.909373 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.909946 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.910056 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.910257 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.909985 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/690f4dfb-2e2f-4419-a331-3a26e0dac535-serving-cert\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.928598 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.926534 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.927029 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.930567 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.930685 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.937473 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.940500 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dc72t"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.945651 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.951647 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-92549"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.951694 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wfn7l"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.951857 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.961583 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.962106 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.962319 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.962689 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.963376 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.964150 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w6fd4"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.964217 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.964904 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-khn5f"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.978827 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.979042 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f8slh"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.980375 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-f8slh"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.982358 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bf56f"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.984056 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.985529 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988004 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fcce0511-0fb0-48e9-927c-c259595a806b-srv-cert\") pod \"olm-operator-6b444d44fb-hwqsn\" (UID: \"fcce0511-0fb0-48e9-927c-c259595a806b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988064 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac81eab9-9f57-48f0-a71d-0bc6087eb8d8-metrics-tls\") pod \"dns-operator-744455d44c-dmlhp\" (UID: \"ac81eab9-9f57-48f0-a71d-0bc6087eb8d8\") " pod="openshift-dns-operator/dns-operator-744455d44c-dmlhp"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988096 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76110eca-3d21-477d-9656-965eaa768c21-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dc72t\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") " pod="openshift-marketplace/marketplace-operator-79b997595-dc72t"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988107 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-czdg9"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988126 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzrrx\" (UniqueName: \"kubernetes.io/projected/8334d428-f7e6-41d5-806b-b5471a354fe8-kube-api-access-rzrrx\") pod \"package-server-manager-789f6589d5-97jhp\" (UID: \"8334d428-f7e6-41d5-806b-b5471a354fe8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988155 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12aeb4bf-6d3b-4c0e-a121-20c286762e7b-config\") pod \"kube-controller-manager-operator-78b949d7b-ctmf9\" (UID: \"12aeb4bf-6d3b-4c0e-a121-20c286762e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988208 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c142cb69-debd-49df-9c48-50cf5e5aa740-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-76cwz\" (UID: \"c142cb69-debd-49df-9c48-50cf5e5aa740\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988237 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-tmpfs\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988287 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12aeb4bf-6d3b-4c0e-a121-20c286762e7b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ctmf9\" (UID: \"12aeb4bf-6d3b-4c0e-a121-20c286762e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988332 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-encryption-config\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988380 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f5f23c6e-7172-4c6c-87f0-8fa5edfa8248-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sms5d\" (UID: \"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988409 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg52t\" (UniqueName: \"kubernetes.io/projected/3c6e2361-7fb0-4e89-8747-ae1a46cb0e65-kube-api-access-gg52t\") pod \"openshift-controller-manager-operator-756b6f6bc6-r9ggh\" (UID: \"3c6e2361-7fb0-4e89-8747-ae1a46cb0e65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988437 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hz55\" (UniqueName: \"kubernetes.io/projected/ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0-kube-api-access-7hz55\") pod \"multus-admission-controller-857f4d67dd-f9db5\" (UID: \"ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f9db5"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988467 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76110eca-3d21-477d-9656-965eaa768c21-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dc72t\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") " pod="openshift-marketplace/marketplace-operator-79b997595-dc72t"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988556 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c7867337-27db-4c61-87e3-fd2f957a0381-signing-cabundle\") pod \"service-ca-9c57cc56f-wfn7l\" (UID: \"c7867337-27db-4c61-87e3-fd2f957a0381\") " pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988594 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f23c6e-7172-4c6c-87f0-8fa5edfa8248-serving-cert\") pod \"openshift-config-operator-7777fb866f-sms5d\" (UID: \"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988624 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqrzw\" (UniqueName: \"kubernetes.io/projected/fcce0511-0fb0-48e9-927c-c259595a806b-kube-api-access-fqrzw\") pod \"olm-operator-6b444d44fb-hwqsn\" (UID: \"fcce0511-0fb0-48e9-927c-c259595a806b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988668 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ee445c4-ffb8-44f0-8cff-19563c07b525-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q5z5p\" (UID: \"2ee445c4-ffb8-44f0-8cff-19563c07b525\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988697 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988729 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g79l\" (UniqueName: \"kubernetes.io/projected/f5f23c6e-7172-4c6c-87f0-8fa5edfa8248-kube-api-access-9g79l\") pod \"openshift-config-operator-7777fb866f-sms5d\" (UID: \"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988757 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwqpc\" (UniqueName: \"kubernetes.io/projected/ea799468-c418-4b46-b279-cb78c37b2ce3-kube-api-access-fwqpc\") pod \"machine-config-controller-84d6567774-jsk5w\" (UID: \"ea799468-c418-4b46-b279-cb78c37b2ce3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988785 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ee445c4-ffb8-44f0-8cff-19563c07b525-trusted-ca\") pod \"ingress-operator-5b745b69d9-q5z5p\" (UID: \"2ee445c4-ffb8-44f0-8cff-19563c07b525\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988812 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqgp2\" (UniqueName: \"kubernetes.io/projected/76110eca-3d21-477d-9656-965eaa768c21-kube-api-access-nqgp2\") pod \"marketplace-operator-79b997595-dc72t\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") " pod="openshift-marketplace/marketplace-operator-79b997595-dc72t"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988860 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bda9acd8-7428-4ed2-aa1c-54c759b39e97-service-ca-bundle\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988892 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6e2361-7fb0-4e89-8747-ae1a46cb0e65-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-r9ggh\" (UID: \"3c6e2361-7fb0-4e89-8747-ae1a46cb0e65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988915 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bda9acd8-7428-4ed2-aa1c-54c759b39e97-stats-auth\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988942 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c6e2361-7fb0-4e89-8747-ae1a46cb0e65-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-r9ggh\" (UID: \"3c6e2361-7fb0-4e89-8747-ae1a46cb0e65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988968 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bda9acd8-7428-4ed2-aa1c-54c759b39e97-default-certificate\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.988996 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c142cb69-debd-49df-9c48-50cf5e5aa740-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-76cwz\" (UID: \"c142cb69-debd-49df-9c48-50cf5e5aa740\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989022 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12aeb4bf-6d3b-4c0e-a121-20c286762e7b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ctmf9\" (UID: \"12aeb4bf-6d3b-4c0e-a121-20c286762e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989045 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea799468-c418-4b46-b279-cb78c37b2ce3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jsk5w\" (UID: \"ea799468-c418-4b46-b279-cb78c37b2ce3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989079 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g47g\" (UniqueName: \"kubernetes.io/projected/bda9acd8-7428-4ed2-aa1c-54c759b39e97-kube-api-access-7g47g\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989108 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/78b30adc-193b-4784-aa76-522479b866dc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qz7nq\" (UID: \"78b30adc-193b-4784-aa76-522479b866dc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989138 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-apiservice-cert\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989171 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vlnr\" (UniqueName: \"kubernetes.io/projected/41a74d6d-5d02-4231-abc9-3adda5eb4a1e-kube-api-access-7vlnr\") pod \"migrator-59844c95c7-zbhrb\" (UID: \"41a74d6d-5d02-4231-abc9-3adda5eb4a1e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbhrb"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989202 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87lrx\" (UniqueName: \"kubernetes.io/projected/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-kube-api-access-87lrx\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989226 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e7b2089-a948-4a74-8bab-7eae32913dbd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g5gxr\" (UID: \"2e7b2089-a948-4a74-8bab-7eae32913dbd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989252 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-audit-dir\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989277 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fcce0511-0fb0-48e9-927c-c259595a806b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hwqsn\" (UID: \"fcce0511-0fb0-48e9-927c-c259595a806b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989310 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxlql\" (UniqueName: \"kubernetes.io/projected/78b30adc-193b-4784-aa76-522479b866dc-kube-api-access-zxlql\") pod \"control-plane-machine-set-operator-78cbb6b69f-qz7nq\" (UID: \"78b30adc-193b-4784-aa76-522479b866dc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989383 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f9db5\" (UID: \"ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f9db5"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989416 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c142cb69-debd-49df-9c48-50cf5e5aa740-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-76cwz\" (UID: \"c142cb69-debd-49df-9c48-50cf5e5aa740\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989445 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj5st\" (UniqueName: \"kubernetes.io/projected/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-kube-api-access-jj5st\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989496 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjl2m\" (UniqueName: \"kubernetes.io/projected/2e7b2089-a948-4a74-8bab-7eae32913dbd-kube-api-access-gjl2m\") pod \"kube-storage-version-migrator-operator-b67b599dd-g5gxr\" (UID: \"2e7b2089-a948-4a74-8bab-7eae32913dbd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989551 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-279wd\" (UniqueName: \"kubernetes.io/projected/c7867337-27db-4c61-87e3-fd2f957a0381-kube-api-access-279wd\") pod \"service-ca-9c57cc56f-wfn7l\" (UID: \"c7867337-27db-4c61-87e3-fd2f957a0381\") " pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989601 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-etcd-client\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989634 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w7kq\" (UniqueName: \"kubernetes.io/projected/2ee445c4-ffb8-44f0-8cff-19563c07b525-kube-api-access-6w7kq\") pod \"ingress-operator-5b745b69d9-q5z5p\" (UID: \"2ee445c4-ffb8-44f0-8cff-19563c07b525\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989500 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f5f23c6e-7172-4c6c-87f0-8fa5edfa8248-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sms5d\" (UID: \"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989660 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989781 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-audit-dir\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.989850 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-webhook-cert\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.990166 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.990190 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e7b2089-a948-4a74-8bab-7eae32913dbd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g5gxr\" (UID: \"2e7b2089-a948-4a74-8bab-7eae32913dbd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.990233 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-audit-policies\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.990305 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda9acd8-7428-4ed2-aa1c-54c759b39e97-metrics-certs\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.990338 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea799468-c418-4b46-b279-cb78c37b2ce3-proxy-tls\") pod \"machine-config-controller-84d6567774-jsk5w\" (UID: \"ea799468-c418-4b46-b279-cb78c37b2ce3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.990520 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c7867337-27db-4c61-87e3-fd2f957a0381-signing-key\") pod \"service-ca-9c57cc56f-wfn7l\" (UID: \"c7867337-27db-4c61-87e3-fd2f957a0381\") " pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.990551 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-serving-cert\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.990579 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ee445c4-ffb8-44f0-8cff-19563c07b525-metrics-tls\") pod \"ingress-operator-5b745b69d9-q5z5p\" (UID: \"2ee445c4-ffb8-44f0-8cff-19563c07b525\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.990656 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltqt7\" (UniqueName: \"kubernetes.io/projected/ac81eab9-9f57-48f0-a71d-0bc6087eb8d8-kube-api-access-ltqt7\") pod \"dns-operator-744455d44c-dmlhp\" (UID: \"ac81eab9-9f57-48f0-a71d-0bc6087eb8d8\") " pod="openshift-dns-operator/dns-operator-744455d44c-dmlhp"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.991053 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.992445 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-audit-policies\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.992879 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8334d428-f7e6-41d5-806b-b5471a354fe8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-97jhp\" (UID: \"8334d428-f7e6-41d5-806b-b5471a354fe8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.993941 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bda9acd8-7428-4ed2-aa1c-54c759b39e97-service-ca-bundle\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.994140 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-etcd-client\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.994564 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda9acd8-7428-4ed2-aa1c-54c759b39e97-metrics-certs\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.994701 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6e2361-7fb0-4e89-8747-ae1a46cb0e65-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-r9ggh\" (UID: \"3c6e2361-7fb0-4e89-8747-ae1a46cb0e65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.995087 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c6e2361-7fb0-4e89-8747-ae1a46cb0e65-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-r9ggh\" (UID: \"3c6e2361-7fb0-4e89-8747-ae1a46cb0e65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.996062 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-serving-cert\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.996432 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bda9acd8-7428-4ed2-aa1c-54c759b39e97-default-certificate\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.996549 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.996613 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-encryption-config\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.998013 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rm422"]
Feb 19 00:09:54 crc kubenswrapper[4825]: I0219 00:09:54.999644 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bda9acd8-7428-4ed2-aa1c-54c759b39e97-stats-auth\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.001589 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dmlhp"]
Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.002716 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9"]
Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.004434 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p"]
Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.006657 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sms5d"]
Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.008570 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-55sxl"]
Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.010181 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"]
Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.010323 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-55sxl"
Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.012308 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9p52n"]
Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.013779 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d"]
Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.014911 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w8x8x"]
Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.016145 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r"]
Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.016784 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 19 00:09:55 crc 
kubenswrapper[4825]: I0219 00:09:55.018108 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f8slh"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.020106 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.021874 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-k9t86"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.023155 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.025617 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.027617 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.029577 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.033775 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bwrjg"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.035563 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rlctl"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.038779 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.039225 
4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bf56f" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.041714 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.042375 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.042876 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zbhrb"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.051429 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dc72t"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.056700 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.056779 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.056804 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tlj4k"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.059760 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tlj4k" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.062061 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f23c6e-7172-4c6c-87f0-8fa5edfa8248-serving-cert\") pod \"openshift-config-operator-7777fb866f-sms5d\" (UID: \"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.062141 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xbfwq"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.062307 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.063564 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xbfwq" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.064299 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-55sxl"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.083349 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.086039 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.086092 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.086109 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.086706 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.086732 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.086743 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.093864 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f9db5"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.096264 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wfn7l"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097189 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vlnr\" (UniqueName: \"kubernetes.io/projected/41a74d6d-5d02-4231-abc9-3adda5eb4a1e-kube-api-access-7vlnr\") pod \"migrator-59844c95c7-zbhrb\" (UID: \"41a74d6d-5d02-4231-abc9-3adda5eb4a1e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbhrb" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097247 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e7b2089-a948-4a74-8bab-7eae32913dbd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g5gxr\" (UID: \"2e7b2089-a948-4a74-8bab-7eae32913dbd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097272 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fcce0511-0fb0-48e9-927c-c259595a806b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hwqsn\" (UID: \"fcce0511-0fb0-48e9-927c-c259595a806b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097300 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxlql\" (UniqueName: \"kubernetes.io/projected/78b30adc-193b-4784-aa76-522479b866dc-kube-api-access-zxlql\") pod \"control-plane-machine-set-operator-78cbb6b69f-qz7nq\" (UID: \"78b30adc-193b-4784-aa76-522479b866dc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097353 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f9db5\" (UID: \"ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f9db5" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097375 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj5st\" (UniqueName: \"kubernetes.io/projected/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-kube-api-access-jj5st\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097404 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjl2m\" (UniqueName: \"kubernetes.io/projected/2e7b2089-a948-4a74-8bab-7eae32913dbd-kube-api-access-gjl2m\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-g5gxr\" (UID: \"2e7b2089-a948-4a74-8bab-7eae32913dbd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097458 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-279wd\" (UniqueName: \"kubernetes.io/projected/c7867337-27db-4c61-87e3-fd2f957a0381-kube-api-access-279wd\") pod \"service-ca-9c57cc56f-wfn7l\" (UID: \"c7867337-27db-4c61-87e3-fd2f957a0381\") " pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097522 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-webhook-cert\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097542 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e7b2089-a948-4a74-8bab-7eae32913dbd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g5gxr\" (UID: \"2e7b2089-a948-4a74-8bab-7eae32913dbd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097579 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea799468-c418-4b46-b279-cb78c37b2ce3-proxy-tls\") pod \"machine-config-controller-84d6567774-jsk5w\" (UID: \"ea799468-c418-4b46-b279-cb78c37b2ce3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 
00:09:55.097602 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c7867337-27db-4c61-87e3-fd2f957a0381-signing-key\") pod \"service-ca-9c57cc56f-wfn7l\" (UID: \"c7867337-27db-4c61-87e3-fd2f957a0381\") " pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097695 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8334d428-f7e6-41d5-806b-b5471a354fe8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-97jhp\" (UID: \"8334d428-f7e6-41d5-806b-b5471a354fe8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097731 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fcce0511-0fb0-48e9-927c-c259595a806b-srv-cert\") pod \"olm-operator-6b444d44fb-hwqsn\" (UID: \"fcce0511-0fb0-48e9-927c-c259595a806b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097776 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76110eca-3d21-477d-9656-965eaa768c21-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dc72t\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") " pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097798 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzrrx\" (UniqueName: \"kubernetes.io/projected/8334d428-f7e6-41d5-806b-b5471a354fe8-kube-api-access-rzrrx\") pod \"package-server-manager-789f6589d5-97jhp\" (UID: 
\"8334d428-f7e6-41d5-806b-b5471a354fe8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097839 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-tmpfs\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097915 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hz55\" (UniqueName: \"kubernetes.io/projected/ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0-kube-api-access-7hz55\") pod \"multus-admission-controller-857f4d67dd-f9db5\" (UID: \"ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f9db5" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.097936 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76110eca-3d21-477d-9656-965eaa768c21-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dc72t\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") " pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.098002 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c7867337-27db-4c61-87e3-fd2f957a0381-signing-cabundle\") pod \"service-ca-9c57cc56f-wfn7l\" (UID: \"c7867337-27db-4c61-87e3-fd2f957a0381\") " pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.098039 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqrzw\" (UniqueName: 
\"kubernetes.io/projected/fcce0511-0fb0-48e9-927c-c259595a806b-kube-api-access-fqrzw\") pod \"olm-operator-6b444d44fb-hwqsn\" (UID: \"fcce0511-0fb0-48e9-927c-c259595a806b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.098105 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwqpc\" (UniqueName: \"kubernetes.io/projected/ea799468-c418-4b46-b279-cb78c37b2ce3-kube-api-access-fwqpc\") pod \"machine-config-controller-84d6567774-jsk5w\" (UID: \"ea799468-c418-4b46-b279-cb78c37b2ce3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.098151 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqgp2\" (UniqueName: \"kubernetes.io/projected/76110eca-3d21-477d-9656-965eaa768c21-kube-api-access-nqgp2\") pod \"marketplace-operator-79b997595-dc72t\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") " pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.098228 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea799468-c418-4b46-b279-cb78c37b2ce3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jsk5w\" (UID: \"ea799468-c418-4b46-b279-cb78c37b2ce3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.098258 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/78b30adc-193b-4784-aa76-522479b866dc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qz7nq\" (UID: \"78b30adc-193b-4784-aa76-522479b866dc\") 
" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.098280 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-apiservice-cert\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.103346 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-tmpfs\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.103540 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xbfwq"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.105142 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ea799468-c418-4b46-b279-cb78c37b2ce3-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jsk5w\" (UID: \"ea799468-c418-4b46-b279-cb78c37b2ce3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.111287 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.117827 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.137759 4825 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.162254 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.181050 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.190050 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ee445c4-ffb8-44f0-8cff-19563c07b525-metrics-tls\") pod \"ingress-operator-5b745b69d9-q5z5p\" (UID: \"2ee445c4-ffb8-44f0-8cff-19563c07b525\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.198174 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.217847 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.237783 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.243331 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac81eab9-9f57-48f0-a71d-0bc6087eb8d8-metrics-tls\") pod \"dns-operator-744455d44c-dmlhp\" (UID: \"ac81eab9-9f57-48f0-a71d-0bc6087eb8d8\") " pod="openshift-dns-operator/dns-operator-744455d44c-dmlhp" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.257619 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 00:09:55 crc 
kubenswrapper[4825]: I0219 00:09:55.278496 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.291589 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bwrjg"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.296651 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29524320-khn5f"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.298218 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.307681 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c142cb69-debd-49df-9c48-50cf5e5aa740-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-76cwz\" (UID: \"c142cb69-debd-49df-9c48-50cf5e5aa740\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.337078 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.338318 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.338379 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.360355 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 00:09:55 crc 
kubenswrapper[4825]: I0219 00:09:55.361637 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c142cb69-debd-49df-9c48-50cf5e5aa740-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-76cwz\" (UID: \"c142cb69-debd-49df-9c48-50cf5e5aa740\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.363277 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ee445c4-ffb8-44f0-8cff-19563c07b525-trusted-ca\") pod \"ingress-operator-5b745b69d9-q5z5p\" (UID: \"2ee445c4-ffb8-44f0-8cff-19563c07b525\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.370797 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bf56f"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.378091 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.397744 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.403481 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12aeb4bf-6d3b-4c0e-a121-20c286762e7b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ctmf9\" (UID: \"12aeb4bf-6d3b-4c0e-a121-20c286762e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.418300 4825 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.419363 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12aeb4bf-6d3b-4c0e-a121-20c286762e7b-config\") pod \"kube-controller-manager-operator-78b949d7b-ctmf9\" (UID: \"12aeb4bf-6d3b-4c0e-a121-20c286762e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.441005 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8"] Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.476996 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5dcc\" (UniqueName: \"kubernetes.io/projected/86321d30-1e05-467f-bdac-35cefcfdd789-kube-api-access-v5dcc\") pod \"console-operator-58897d9998-czdg9\" (UID: \"86321d30-1e05-467f-bdac-35cefcfdd789\") " pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.492421 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jwwz\" (UniqueName: \"kubernetes.io/projected/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-kube-api-access-8jwwz\") pod \"oauth-openshift-558db77b4-w6fd4\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.514954 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhsjj\" (UniqueName: \"kubernetes.io/projected/b7d82a4a-3947-4645-982a-654a8101ba55-kube-api-access-dhsjj\") pod \"apiserver-76f77b778f-rm422\" (UID: \"b7d82a4a-3947-4645-982a-654a8101ba55\") " pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:55 crc 
kubenswrapper[4825]: I0219 00:09:55.517725 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 00:09:55 crc kubenswrapper[4825]: W0219 00:09:55.526547 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf1c5562_2a73_410f_b7b3_fe0edab3216b.slice/crio-2a0e139d6409ca233298f930b8c555ec86d55b8836249d96b9ba09631c3cc2d8 WatchSource:0}: Error finding container 2a0e139d6409ca233298f930b8c555ec86d55b8836249d96b9ba09631c3cc2d8: Status 404 returned error can't find the container with id 2a0e139d6409ca233298f930b8c555ec86d55b8836249d96b9ba09631c3cc2d8 Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.538140 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.557896 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.577674 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.597256 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.617174 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.638247 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.654049 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.657998 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.677634 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.709236 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.718993 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.738948 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.756821 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.759174 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.768409 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.782681 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.813296 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr2x4\" (UniqueName: \"kubernetes.io/projected/999a793b-4d2e-41bb-bd09-cd8ca31cef0c-kube-api-access-rr2x4\") pod \"machine-api-operator-5694c8668f-9p52n\" (UID: \"999a793b-4d2e-41bb-bd09-cd8ca31cef0c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.832344 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ztk7\" (UniqueName: \"kubernetes.io/projected/fff3e916-b71f-44c3-a4ac-a78efc547a28-kube-api-access-5ztk7\") pod \"route-controller-manager-6576b87f9c-96ctn\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.856212 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtmnj\" (UniqueName: \"kubernetes.io/projected/1f6724cf-dc1e-44cc-8f59-91d3e8b00970-kube-api-access-rtmnj\") pod \"console-f9d7485db-92549\" (UID: \"1f6724cf-dc1e-44cc-8f59-91d3e8b00970\") " pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.865599 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.875241 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc9rk\" (UniqueName: \"kubernetes.io/projected/3da90cf7-acf9-4fa5-8a59-1f444dd5a619-kube-api-access-kc9rk\") pod \"openshift-apiserver-operator-796bbdcf4f-9lm2r\" (UID: \"3da90cf7-acf9-4fa5-8a59-1f444dd5a619\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.887276 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.894451 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vnph\" (UniqueName: \"kubernetes.io/projected/690f4dfb-2e2f-4419-a331-3a26e0dac535-kube-api-access-4vnph\") pod \"authentication-operator-69f744f599-k9t86\" (UID: \"690f4dfb-2e2f-4419-a331-3a26e0dac535\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.895330 4825 request.go:700] Waited for 1.002251867s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0 Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.898761 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.908589 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ea799468-c418-4b46-b279-cb78c37b2ce3-proxy-tls\") pod 
\"machine-config-controller-84d6567774-jsk5w\" (UID: \"ea799468-c418-4b46-b279-cb78c37b2ce3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.917496 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.938213 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.950414 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/78b30adc-193b-4784-aa76-522479b866dc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qz7nq\" (UID: \"78b30adc-193b-4784-aa76-522479b866dc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.954715 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" event={"ID":"bf1c5562-2a73-410f-b7b3-fe0edab3216b","Type":"ContainerStarted","Data":"2a0e139d6409ca233298f930b8c555ec86d55b8836249d96b9ba09631c3cc2d8"} Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.956147 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" event={"ID":"f110da82-46c4-44d5-91f6-195be763d96f","Type":"ContainerStarted","Data":"d32fd3dc89551756a4336a7d7fa5b34baccbac6992b86cf9c7eb6db137f108a1"} Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.957416 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bf56f" 
event={"ID":"29be03fe-da22-41a7-9243-67aa815fbfb1","Type":"ContainerStarted","Data":"80e7708a8fcb878073e52f437ff8ea5d1d90cfef946764ec0cff9c9e172cc921"} Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.958112 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.958837 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-khn5f" event={"ID":"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc","Type":"ContainerStarted","Data":"97d08616e9be0202b076172ec8d13ac6f719ad54a18a896c48fbcba0e626b6e3"} Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.959861 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" event={"ID":"400b2fc6-03e6-48b5-9424-67ac6c34cfb1","Type":"ContainerStarted","Data":"256c7ff20c7e47d2999f935cb8c3c77a62b983d1d922e0aa70124631ddf277fa"} Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.977544 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-92549" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.977813 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 00:09:55 crc kubenswrapper[4825]: I0219 00:09:55.997725 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.017969 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.037352 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.040769 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-czdg9"] Feb 19 00:09:56 crc kubenswrapper[4825]: W0219 00:09:56.047408 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86321d30_1e05_467f_bdac_35cefcfdd789.slice/crio-8bc5466e01f05973f237509b67621a1a7b216ba049cb2d6d7850888a6846e9a6 WatchSource:0}: Error finding container 8bc5466e01f05973f237509b67621a1a7b216ba049cb2d6d7850888a6846e9a6: Status 404 returned error can't find the container with id 8bc5466e01f05973f237509b67621a1a7b216ba049cb2d6d7850888a6846e9a6 Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.056775 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.063916 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-f9db5\" (UID: \"ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f9db5" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.078134 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.090199 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.097453 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.099196 4825 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.099264 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcce0511-0fb0-48e9-927c-c259595a806b-profile-collector-cert podName:fcce0511-0fb0-48e9-927c-c259595a806b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:56.599239556 +0000 UTC m=+142.290205603 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/fcce0511-0fb0-48e9-927c-c259595a806b-profile-collector-cert") pod "olm-operator-6b444d44fb-hwqsn" (UID: "fcce0511-0fb0-48e9-927c-c259595a806b") : failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.099444 4825 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.099485 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-apiservice-cert podName:b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:56.599473893 +0000 UTC m=+142.290439940 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-apiservice-cert") pod "packageserver-d55dfcdfc-qh6r7" (UID: "b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b") : failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.099548 4825 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.099582 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e7b2089-a948-4a74-8bab-7eae32913dbd-config podName:2e7b2089-a948-4a74-8bab-7eae32913dbd nodeName:}" failed. No retries permitted until 2026-02-19 00:09:56.599574147 +0000 UTC m=+142.290540194 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/2e7b2089-a948-4a74-8bab-7eae32913dbd-config") pod "kube-storage-version-migrator-operator-b67b599dd-g5gxr" (UID: "2e7b2089-a948-4a74-8bab-7eae32913dbd") : failed to sync configmap cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.103426 4825 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.103467 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7867337-27db-4c61-87e3-fd2f957a0381-signing-cabundle podName:c7867337-27db-4c61-87e3-fd2f957a0381 nodeName:}" failed. No retries permitted until 2026-02-19 00:09:56.603457004 +0000 UTC m=+142.294423051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/c7867337-27db-4c61-87e3-fd2f957a0381-signing-cabundle") pod "service-ca-9c57cc56f-wfn7l" (UID: "c7867337-27db-4c61-87e3-fd2f957a0381") : failed to sync configmap cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.103491 4825 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.103539 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76110eca-3d21-477d-9656-965eaa768c21-marketplace-operator-metrics podName:76110eca-3d21-477d-9656-965eaa768c21 nodeName:}" failed. No retries permitted until 2026-02-19 00:09:56.603531976 +0000 UTC m=+142.294498023 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/76110eca-3d21-477d-9656-965eaa768c21-marketplace-operator-metrics") pod "marketplace-operator-79b997595-dc72t" (UID: "76110eca-3d21-477d-9656-965eaa768c21") : failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.104765 4825 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.104867 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76110eca-3d21-477d-9656-965eaa768c21-marketplace-trusted-ca podName:76110eca-3d21-477d-9656-965eaa768c21 nodeName:}" failed. No retries permitted until 2026-02-19 00:09:56.604843759 +0000 UTC m=+142.295809796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/76110eca-3d21-477d-9656-965eaa768c21-marketplace-trusted-ca") pod "marketplace-operator-79b997595-dc72t" (UID: "76110eca-3d21-477d-9656-965eaa768c21") : failed to sync configmap cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.104900 4825 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.104927 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8334d428-f7e6-41d5-806b-b5471a354fe8-package-server-manager-serving-cert podName:8334d428-f7e6-41d5-806b-b5471a354fe8 nodeName:}" failed. No retries permitted until 2026-02-19 00:09:56.604920521 +0000 UTC m=+142.295886568 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8334d428-f7e6-41d5-806b-b5471a354fe8-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-97jhp" (UID: "8334d428-f7e6-41d5-806b-b5471a354fe8") : failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.104942 4825 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.104963 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcce0511-0fb0-48e9-927c-c259595a806b-srv-cert podName:fcce0511-0fb0-48e9-927c-c259595a806b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:56.604957422 +0000 UTC m=+142.295923469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/fcce0511-0fb0-48e9-927c-c259595a806b-srv-cert") pod "olm-operator-6b444d44fb-hwqsn" (UID: "fcce0511-0fb0-48e9-927c-c259595a806b") : failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.105326 4825 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.105423 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7867337-27db-4c61-87e3-fd2f957a0381-signing-key podName:c7867337-27db-4c61-87e3-fd2f957a0381 nodeName:}" failed. No retries permitted until 2026-02-19 00:09:56.605397596 +0000 UTC m=+142.296363643 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/c7867337-27db-4c61-87e3-fd2f957a0381-signing-key") pod "service-ca-9c57cc56f-wfn7l" (UID: "c7867337-27db-4c61-87e3-fd2f957a0381") : failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.106214 4825 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.106270 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-webhook-cert podName:b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:56.606257565 +0000 UTC m=+142.297223712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-webhook-cert") pod "packageserver-d55dfcdfc-qh6r7" (UID: "b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b") : failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.106296 4825 secret.go:188] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: E0219 00:09:56.106323 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e7b2089-a948-4a74-8bab-7eae32913dbd-serving-cert podName:2e7b2089-a948-4a74-8bab-7eae32913dbd nodeName:}" failed. No retries permitted until 2026-02-19 00:09:56.606316627 +0000 UTC m=+142.297282674 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2e7b2089-a948-4a74-8bab-7eae32913dbd-serving-cert") pod "kube-storage-version-migrator-operator-b67b599dd-g5gxr" (UID: "2e7b2089-a948-4a74-8bab-7eae32913dbd") : failed to sync secret cache: timed out waiting for the condition Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.117588 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.140085 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.145716 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.157436 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.178243 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.197990 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.220933 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.227386 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w6fd4"] Feb 19 00:09:56 crc kubenswrapper[4825]: 
I0219 00:09:56.236968 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-rm422"] Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.237017 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r"] Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.241199 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.258265 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 00:09:56 crc kubenswrapper[4825]: W0219 00:09:56.264264 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da90cf7_acf9_4fa5_8a59_1f444dd5a619.slice/crio-601903235c890951776b6f06cfaf95498e3690fed7fa0c3540555e5706c0ce12 WatchSource:0}: Error finding container 601903235c890951776b6f06cfaf95498e3690fed7fa0c3540555e5706c0ce12: Status 404 returned error can't find the container with id 601903235c890951776b6f06cfaf95498e3690fed7fa0c3540555e5706c0ce12 Feb 19 00:09:56 crc kubenswrapper[4825]: W0219 00:09:56.266493 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7d82a4a_3947_4645_982a_654a8101ba55.slice/crio-a255c0b7249e804ee733d0d774b85467e07cc1086fa73107a6dc4f856a7de2a7 WatchSource:0}: Error finding container a255c0b7249e804ee733d0d774b85467e07cc1086fa73107a6dc4f856a7de2a7: Status 404 returned error can't find the container with id a255c0b7249e804ee733d0d774b85467e07cc1086fa73107a6dc4f856a7de2a7 Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.277575 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 
00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.290237 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-92549"] Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.298195 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.320250 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.337981 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.357972 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.377800 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.398220 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.423149 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.438007 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.456447 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.477389 4825 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.497196 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.517387 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.537662 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.556703 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.577100 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 00:09:56 crc kubenswrapper[4825]: W0219 00:09:56.591493 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f6724cf_dc1e_44cc_8f59_91d3e8b00970.slice/crio-604e7915d19799a3ba1d181bc77cd2c40940c0ea07867b332cadb9c7a65f3f0e WatchSource:0}: Error finding container 604e7915d19799a3ba1d181bc77cd2c40940c0ea07867b332cadb9c7a65f3f0e: Status 404 returned error can't find the container with id 604e7915d19799a3ba1d181bc77cd2c40940c0ea07867b332cadb9c7a65f3f0e Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.596457 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn"] Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.601049 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-k9t86"] Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.601825 4825 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9p52n"] Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.617398 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.634099 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-apiservice-cert\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.634181 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e7b2089-a948-4a74-8bab-7eae32913dbd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g5gxr\" (UID: \"2e7b2089-a948-4a74-8bab-7eae32913dbd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.634207 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fcce0511-0fb0-48e9-927c-c259595a806b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hwqsn\" (UID: \"fcce0511-0fb0-48e9-927c-c259595a806b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.634282 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-webhook-cert\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" Feb 19 
00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.634301 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e7b2089-a948-4a74-8bab-7eae32913dbd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g5gxr\" (UID: \"2e7b2089-a948-4a74-8bab-7eae32913dbd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.634322 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c7867337-27db-4c61-87e3-fd2f957a0381-signing-key\") pod \"service-ca-9c57cc56f-wfn7l\" (UID: \"c7867337-27db-4c61-87e3-fd2f957a0381\") " pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.634356 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8334d428-f7e6-41d5-806b-b5471a354fe8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-97jhp\" (UID: \"8334d428-f7e6-41d5-806b-b5471a354fe8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.634384 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fcce0511-0fb0-48e9-927c-c259595a806b-srv-cert\") pod \"olm-operator-6b444d44fb-hwqsn\" (UID: \"fcce0511-0fb0-48e9-927c-c259595a806b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.634406 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/76110eca-3d21-477d-9656-965eaa768c21-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dc72t\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") " pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.634464 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76110eca-3d21-477d-9656-965eaa768c21-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dc72t\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") " pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.634494 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c7867337-27db-4c61-87e3-fd2f957a0381-signing-cabundle\") pod \"service-ca-9c57cc56f-wfn7l\" (UID: \"c7867337-27db-4c61-87e3-fd2f957a0381\") " pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.635951 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c7867337-27db-4c61-87e3-fd2f957a0381-signing-cabundle\") pod \"service-ca-9c57cc56f-wfn7l\" (UID: \"c7867337-27db-4c61-87e3-fd2f957a0381\") " pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.635988 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e7b2089-a948-4a74-8bab-7eae32913dbd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g5gxr\" (UID: \"2e7b2089-a948-4a74-8bab-7eae32913dbd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 
00:09:56.637083 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76110eca-3d21-477d-9656-965eaa768c21-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dc72t\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") " pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.638666 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.640006 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-webhook-cert\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.640152 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8334d428-f7e6-41d5-806b-b5471a354fe8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-97jhp\" (UID: \"8334d428-f7e6-41d5-806b-b5471a354fe8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.641274 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-apiservice-cert\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.641344 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/fcce0511-0fb0-48e9-927c-c259595a806b-srv-cert\") pod \"olm-operator-6b444d44fb-hwqsn\" (UID: \"fcce0511-0fb0-48e9-927c-c259595a806b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.641726 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c7867337-27db-4c61-87e3-fd2f957a0381-signing-key\") pod \"service-ca-9c57cc56f-wfn7l\" (UID: \"c7867337-27db-4c61-87e3-fd2f957a0381\") " pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.642321 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76110eca-3d21-477d-9656-965eaa768c21-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dc72t\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") " pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.643060 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e7b2089-a948-4a74-8bab-7eae32913dbd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g5gxr\" (UID: \"2e7b2089-a948-4a74-8bab-7eae32913dbd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" Feb 19 00:09:56 crc kubenswrapper[4825]: W0219 00:09:56.649316 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfff3e916_b71f_44c3_a4ac_a78efc547a28.slice/crio-7d9382f10c20d16f8d16f470eb937adaf6184af8e9a7724e0a81c8e580d764b6 WatchSource:0}: Error finding container 7d9382f10c20d16f8d16f470eb937adaf6184af8e9a7724e0a81c8e580d764b6: Status 404 returned error can't find the 
container with id 7d9382f10c20d16f8d16f470eb937adaf6184af8e9a7724e0a81c8e580d764b6 Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.649409 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fcce0511-0fb0-48e9-927c-c259595a806b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hwqsn\" (UID: \"fcce0511-0fb0-48e9-927c-c259595a806b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" Feb 19 00:09:56 crc kubenswrapper[4825]: W0219 00:09:56.651962 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod690f4dfb_2e2f_4419_a331_3a26e0dac535.slice/crio-1ebb203d83cd1419a9bb51075f7dc1f7e80c9ce5d28778037e2e2c3fa885af24 WatchSource:0}: Error finding container 1ebb203d83cd1419a9bb51075f7dc1f7e80c9ce5d28778037e2e2c3fa885af24: Status 404 returned error can't find the container with id 1ebb203d83cd1419a9bb51075f7dc1f7e80c9ce5d28778037e2e2c3fa885af24 Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.660074 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.677057 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.697442 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.716920 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.737932 4825 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.759524 4825 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.778213 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.800360 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.817383 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.857580 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12aeb4bf-6d3b-4c0e-a121-20c286762e7b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ctmf9\" (UID: \"12aeb4bf-6d3b-4c0e-a121-20c286762e7b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.871821 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg52t\" (UniqueName: \"kubernetes.io/projected/3c6e2361-7fb0-4e89-8747-ae1a46cb0e65-kube-api-access-gg52t\") pod \"openshift-controller-manager-operator-756b6f6bc6-r9ggh\" (UID: \"3c6e2361-7fb0-4e89-8747-ae1a46cb0e65\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.896778 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c142cb69-debd-49df-9c48-50cf5e5aa740-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-76cwz\" (UID: \"c142cb69-debd-49df-9c48-50cf5e5aa740\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.909806 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.913445 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g47g\" (UniqueName: \"kubernetes.io/projected/bda9acd8-7428-4ed2-aa1c-54c759b39e97-kube-api-access-7g47g\") pod \"router-default-5444994796-lhzt4\" (UID: \"bda9acd8-7428-4ed2-aa1c-54c759b39e97\") " pod="openshift-ingress/router-default-5444994796-lhzt4" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.915358 4825 request.go:700] Waited for 1.92585981s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.937536 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ee445c4-ffb8-44f0-8cff-19563c07b525-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q5z5p\" (UID: \"2ee445c4-ffb8-44f0-8cff-19563c07b525\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.953655 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87lrx\" (UniqueName: \"kubernetes.io/projected/fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5-kube-api-access-87lrx\") pod \"apiserver-7bbb656c7d-jkb4q\" (UID: \"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.966057 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" event={"ID":"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c","Type":"ContainerStarted","Data":"0752c08aa49bca121cbc9dbdc95a72c824bfd8c31f5aab57eaa008536f8438d3"} Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.967218 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rm422" event={"ID":"b7d82a4a-3947-4645-982a-654a8101ba55","Type":"ContainerStarted","Data":"a255c0b7249e804ee733d0d774b85467e07cc1086fa73107a6dc4f856a7de2a7"} Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.968995 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" event={"ID":"f110da82-46c4-44d5-91f6-195be763d96f","Type":"ContainerStarted","Data":"86bfc99fc089e0ddaf9e2975beb842e593f60bbc8deda0da46c2b3a7213ff6e9"} Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.970381 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" event={"ID":"999a793b-4d2e-41bb-bd09-cd8ca31cef0c","Type":"ContainerStarted","Data":"1d2b259472fc1193078fe6fb35b20759453338ce4491e010e15bf1b5c9fe393f"} Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.972274 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" event={"ID":"3da90cf7-acf9-4fa5-8a59-1f444dd5a619","Type":"ContainerStarted","Data":"601903235c890951776b6f06cfaf95498e3690fed7fa0c3540555e5706c0ce12"} Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.973149 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w7kq\" (UniqueName: 
\"kubernetes.io/projected/2ee445c4-ffb8-44f0-8cff-19563c07b525-kube-api-access-6w7kq\") pod \"ingress-operator-5b745b69d9-q5z5p\" (UID: \"2ee445c4-ffb8-44f0-8cff-19563c07b525\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.973203 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" event={"ID":"690f4dfb-2e2f-4419-a331-3a26e0dac535","Type":"ContainerStarted","Data":"1ebb203d83cd1419a9bb51075f7dc1f7e80c9ce5d28778037e2e2c3fa885af24"} Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.974802 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" event={"ID":"400b2fc6-03e6-48b5-9424-67ac6c34cfb1","Type":"ContainerStarted","Data":"c4275c946d6cf5d132577f8b60c47a128be6cf24300c5cb9a17da90efdf7a17d"} Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.978013 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" event={"ID":"fff3e916-b71f-44c3-a4ac-a78efc547a28","Type":"ContainerStarted","Data":"7d9382f10c20d16f8d16f470eb937adaf6184af8e9a7724e0a81c8e580d764b6"} Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.979992 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-czdg9" event={"ID":"86321d30-1e05-467f-bdac-35cefcfdd789","Type":"ContainerStarted","Data":"8bc5466e01f05973f237509b67621a1a7b216ba049cb2d6d7850888a6846e9a6"} Feb 19 00:09:56 crc kubenswrapper[4825]: I0219 00:09:56.981124 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-92549" event={"ID":"1f6724cf-dc1e-44cc-8f59-91d3e8b00970","Type":"ContainerStarted","Data":"604e7915d19799a3ba1d181bc77cd2c40940c0ea07867b332cadb9c7a65f3f0e"} Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 
00:09:57.002270 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g79l\" (UniqueName: \"kubernetes.io/projected/f5f23c6e-7172-4c6c-87f0-8fa5edfa8248-kube-api-access-9g79l\") pod \"openshift-config-operator-7777fb866f-sms5d\" (UID: \"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.017242 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.020142 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltqt7\" (UniqueName: \"kubernetes.io/projected/ac81eab9-9f57-48f0-a71d-0bc6087eb8d8-kube-api-access-ltqt7\") pod \"dns-operator-744455d44c-dmlhp\" (UID: \"ac81eab9-9f57-48f0-a71d-0bc6087eb8d8\") " pod="openshift-dns-operator/dns-operator-744455d44c-dmlhp" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.037724 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.058313 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.076578 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.107020 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.108324 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.110424 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.118302 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.121659 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-lhzt4" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.137630 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.142965 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.152457 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.158476 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.177678 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.181322 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dmlhp" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.193860 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.199312 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.232195 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9"] Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.257181 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxlql\" (UniqueName: \"kubernetes.io/projected/78b30adc-193b-4784-aa76-522479b866dc-kube-api-access-zxlql\") pod \"control-plane-machine-set-operator-78cbb6b69f-qz7nq\" (UID: \"78b30adc-193b-4784-aa76-522479b866dc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.273912 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj5st\" (UniqueName: \"kubernetes.io/projected/b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b-kube-api-access-jj5st\") pod \"packageserver-d55dfcdfc-qh6r7\" (UID: \"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.276142 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vlnr\" (UniqueName: \"kubernetes.io/projected/41a74d6d-5d02-4231-abc9-3adda5eb4a1e-kube-api-access-7vlnr\") pod \"migrator-59844c95c7-zbhrb\" (UID: \"41a74d6d-5d02-4231-abc9-3adda5eb4a1e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbhrb" 
Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.291220 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjl2m\" (UniqueName: \"kubernetes.io/projected/2e7b2089-a948-4a74-8bab-7eae32913dbd-kube-api-access-gjl2m\") pod \"kube-storage-version-migrator-operator-b67b599dd-g5gxr\" (UID: \"2e7b2089-a948-4a74-8bab-7eae32913dbd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" Feb 19 00:09:57 crc kubenswrapper[4825]: W0219 00:09:57.303291 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12aeb4bf_6d3b_4c0e_a121_20c286762e7b.slice/crio-9fabc0722aac5a0f6b4b93a678b6f6c83f973ed7f3d06160f9dfb8bbe2c562b4 WatchSource:0}: Error finding container 9fabc0722aac5a0f6b4b93a678b6f6c83f973ed7f3d06160f9dfb8bbe2c562b4: Status 404 returned error can't find the container with id 9fabc0722aac5a0f6b4b93a678b6f6c83f973ed7f3d06160f9dfb8bbe2c562b4 Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.316333 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-279wd\" (UniqueName: \"kubernetes.io/projected/c7867337-27db-4c61-87e3-fd2f957a0381-kube-api-access-279wd\") pod \"service-ca-9c57cc56f-wfn7l\" (UID: \"c7867337-27db-4c61-87e3-fd2f957a0381\") " pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.319785 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh"] Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.337734 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hz55\" (UniqueName: \"kubernetes.io/projected/ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0-kube-api-access-7hz55\") pod \"multus-admission-controller-857f4d67dd-f9db5\" (UID: 
\"ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-f9db5" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.355901 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqrzw\" (UniqueName: \"kubernetes.io/projected/fcce0511-0fb0-48e9-927c-c259595a806b-kube-api-access-fqrzw\") pod \"olm-operator-6b444d44fb-hwqsn\" (UID: \"fcce0511-0fb0-48e9-927c-c259595a806b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.363023 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"] Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.376016 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwqpc\" (UniqueName: \"kubernetes.io/projected/ea799468-c418-4b46-b279-cb78c37b2ce3-kube-api-access-fwqpc\") pod \"machine-config-controller-84d6567774-jsk5w\" (UID: \"ea799468-c418-4b46-b279-cb78c37b2ce3\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.389237 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sms5d"] Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.394422 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqgp2\" (UniqueName: \"kubernetes.io/projected/76110eca-3d21-477d-9656-965eaa768c21-kube-api-access-nqgp2\") pod \"marketplace-operator-79b997595-dc72t\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") " pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.412190 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.420606 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzrrx\" (UniqueName: \"kubernetes.io/projected/8334d428-f7e6-41d5-806b-b5471a354fe8-kube-api-access-rzrrx\") pod \"package-server-manager-789f6589d5-97jhp\" (UID: \"8334d428-f7e6-41d5-806b-b5471a354fe8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.421752 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbhrb" Feb 19 00:09:57 crc kubenswrapper[4825]: W0219 00:09:57.421874 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe08af25_4eb5_4cae_aada_e0b9d0bd8ff5.slice/crio-69ccfcf3c55a7662d3270a5f175109d921197e719a098ebfe7336289f77ab422 WatchSource:0}: Error finding container 69ccfcf3c55a7662d3270a5f175109d921197e719a098ebfe7336289f77ab422: Status 404 returned error can't find the container with id 69ccfcf3c55a7662d3270a5f175109d921197e719a098ebfe7336289f77ab422 Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.451011 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-f9db5" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.455395 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfsbk\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-kube-api-access-mfsbk\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.455443 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-bound-sa-token\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.455531 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-config\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.455569 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/059b4f19-56a5-4dad-a957-5cc444811056-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-754pg\" (UID: \"059b4f19-56a5-4dad-a957-5cc444811056\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.455600 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43b75e6e-5b2d-4690-b500-20ad18a1e042-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.455651 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-registry-tls\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.455675 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43b75e6e-5b2d-4690-b500-20ad18a1e042-registry-certificates\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.455845 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj8bv\" (UniqueName: \"kubernetes.io/projected/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-kube-api-access-jj8bv\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.455997 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-etcd-service-ca\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.456048 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-etcd-client\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.456078 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43b75e6e-5b2d-4690-b500-20ad18a1e042-trusted-ca\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.456102 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/059b4f19-56a5-4dad-a957-5cc444811056-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-754pg\" (UID: \"059b4f19-56a5-4dad-a957-5cc444811056\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.456166 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e19e2fd2-3946-48d7-8954-121e91331caa-images\") pod \"machine-config-operator-74547568cd-fttgh\" (UID: \"e19e2fd2-3946-48d7-8954-121e91331caa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.456852 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/43b75e6e-5b2d-4690-b500-20ad18a1e042-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.456885 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e19e2fd2-3946-48d7-8954-121e91331caa-proxy-tls\") pod \"machine-config-operator-74547568cd-fttgh\" (UID: \"e19e2fd2-3946-48d7-8954-121e91331caa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.456948 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-etcd-ca\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.456965 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e19e2fd2-3946-48d7-8954-121e91331caa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fttgh\" (UID: \"e19e2fd2-3946-48d7-8954-121e91331caa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.456992 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxgkx\" (UniqueName: \"kubernetes.io/projected/e19e2fd2-3946-48d7-8954-121e91331caa-kube-api-access-rxgkx\") pod \"machine-config-operator-74547568cd-fttgh\" (UID: \"e19e2fd2-3946-48d7-8954-121e91331caa\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.457051 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.457099 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-serving-cert\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.457119 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/059b4f19-56a5-4dad-a957-5cc444811056-config\") pod \"kube-apiserver-operator-766d6c64bb-754pg\" (UID: \"059b4f19-56a5-4dad-a957-5cc444811056\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" Feb 19 00:09:57 crc kubenswrapper[4825]: E0219 00:09:57.457695 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:09:57.957673233 +0000 UTC m=+143.648639350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.469261 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.502139 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.519126 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.535278 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.552170 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.564096 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565102 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565275 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/059b4f19-56a5-4dad-a957-5cc444811056-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-754pg\" (UID: \"059b4f19-56a5-4dad-a957-5cc444811056\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565313 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14aa4fde-5a0f-41ce-a61e-902c0597d698-secret-volume\") pod \"collect-profiles-29524320-q6rwr\" (UID: \"14aa4fde-5a0f-41ce-a61e-902c0597d698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565338 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59q8n\" (UniqueName: \"kubernetes.io/projected/3575098a-b4e1-4b3e-b0da-b9f9867ce800-kube-api-access-59q8n\") pod \"service-ca-operator-777779d784-jzfx7\" (UID: \"3575098a-b4e1-4b3e-b0da-b9f9867ce800\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565379 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/43b75e6e-5b2d-4690-b500-20ad18a1e042-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565447 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-registry-tls\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565471 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43b75e6e-5b2d-4690-b500-20ad18a1e042-registry-certificates\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565495 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vt9c\" (UniqueName: \"kubernetes.io/projected/14aa4fde-5a0f-41ce-a61e-902c0597d698-kube-api-access-8vt9c\") pod \"collect-profiles-29524320-q6rwr\" (UID: \"14aa4fde-5a0f-41ce-a61e-902c0597d698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565537 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-socket-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565624 4825 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gvcb\" (UniqueName: \"kubernetes.io/projected/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-kube-api-access-7gvcb\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565650 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj8bv\" (UniqueName: \"kubernetes.io/projected/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-kube-api-access-jj8bv\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565675 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zcq8\" (UniqueName: \"kubernetes.io/projected/113bf002-41c6-4821-83eb-db9515e73174-kube-api-access-6zcq8\") pod \"dns-default-xbfwq\" (UID: \"113bf002-41c6-4821-83eb-db9515e73174\") " pod="openshift-dns/dns-default-xbfwq" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565700 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-csi-data-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: E0219 00:09:57.565780 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:58.06576032 +0000 UTC m=+143.756726367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565823 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/113bf002-41c6-4821-83eb-db9515e73174-config-volume\") pod \"dns-default-xbfwq\" (UID: \"113bf002-41c6-4821-83eb-db9515e73174\") " pod="openshift-dns/dns-default-xbfwq" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565880 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zck8f\" (UniqueName: \"kubernetes.io/projected/56d54501-3819-4db7-876f-d530dca3082c-kube-api-access-zck8f\") pod \"catalog-operator-68c6474976-qqkkd\" (UID: \"56d54501-3819-4db7-876f-d530dca3082c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565913 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3575098a-b4e1-4b3e-b0da-b9f9867ce800-serving-cert\") pod \"service-ca-operator-777779d784-jzfx7\" (UID: \"3575098a-b4e1-4b3e-b0da-b9f9867ce800\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565952 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/113bf002-41c6-4821-83eb-db9515e73174-metrics-tls\") 
pod \"dns-default-xbfwq\" (UID: \"113bf002-41c6-4821-83eb-db9515e73174\") " pod="openshift-dns/dns-default-xbfwq" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.565980 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4z4f\" (UniqueName: \"kubernetes.io/projected/1d27b12a-0f15-445d-8f5e-40f234ed6f0d-kube-api-access-g4z4f\") pod \"machine-config-server-tlj4k\" (UID: \"1d27b12a-0f15-445d-8f5e-40f234ed6f0d\") " pod="openshift-machine-config-operator/machine-config-server-tlj4k" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566032 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0f27ebb8-bc9a-433f-8e93-c521951d22ee-cert\") pod \"ingress-canary-55sxl\" (UID: \"0f27ebb8-bc9a-433f-8e93-c521951d22ee\") " pod="openshift-ingress-canary/ingress-canary-55sxl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566048 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-plugins-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566086 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-etcd-service-ca\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566131 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-etcd-client\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566169 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43b75e6e-5b2d-4690-b500-20ad18a1e042-trusted-ca\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566184 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/059b4f19-56a5-4dad-a957-5cc444811056-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-754pg\" (UID: \"059b4f19-56a5-4dad-a957-5cc444811056\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566284 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-mountpoint-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566311 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1d27b12a-0f15-445d-8f5e-40f234ed6f0d-node-bootstrap-token\") pod \"machine-config-server-tlj4k\" (UID: \"1d27b12a-0f15-445d-8f5e-40f234ed6f0d\") " pod="openshift-machine-config-operator/machine-config-server-tlj4k" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566336 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e19e2fd2-3946-48d7-8954-121e91331caa-images\") pod \"machine-config-operator-74547568cd-fttgh\" (UID: \"e19e2fd2-3946-48d7-8954-121e91331caa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566367 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e19e2fd2-3946-48d7-8954-121e91331caa-proxy-tls\") pod \"machine-config-operator-74547568cd-fttgh\" (UID: \"e19e2fd2-3946-48d7-8954-121e91331caa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566387 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43b75e6e-5b2d-4690-b500-20ad18a1e042-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566430 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-registration-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566469 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e19e2fd2-3946-48d7-8954-121e91331caa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fttgh\" (UID: \"e19e2fd2-3946-48d7-8954-121e91331caa\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566484 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/56d54501-3819-4db7-876f-d530dca3082c-srv-cert\") pod \"catalog-operator-68c6474976-qqkkd\" (UID: \"56d54501-3819-4db7-876f-d530dca3082c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566579 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-etcd-ca\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566600 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxgkx\" (UniqueName: \"kubernetes.io/projected/e19e2fd2-3946-48d7-8954-121e91331caa-kube-api-access-rxgkx\") pod \"machine-config-operator-74547568cd-fttgh\" (UID: \"e19e2fd2-3946-48d7-8954-121e91331caa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566642 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566660 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3575098a-b4e1-4b3e-b0da-b9f9867ce800-config\") pod \"service-ca-operator-777779d784-jzfx7\" (UID: \"3575098a-b4e1-4b3e-b0da-b9f9867ce800\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566691 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-serving-cert\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566710 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/059b4f19-56a5-4dad-a957-5cc444811056-config\") pod \"kube-apiserver-operator-766d6c64bb-754pg\" (UID: \"059b4f19-56a5-4dad-a957-5cc444811056\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566769 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfsbk\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-kube-api-access-mfsbk\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566787 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n25wh\" (UniqueName: \"kubernetes.io/projected/0f27ebb8-bc9a-433f-8e93-c521951d22ee-kube-api-access-n25wh\") pod \"ingress-canary-55sxl\" (UID: \"0f27ebb8-bc9a-433f-8e93-c521951d22ee\") " pod="openshift-ingress-canary/ingress-canary-55sxl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566815 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1d27b12a-0f15-445d-8f5e-40f234ed6f0d-certs\") pod \"machine-config-server-tlj4k\" (UID: \"1d27b12a-0f15-445d-8f5e-40f234ed6f0d\") " pod="openshift-machine-config-operator/machine-config-server-tlj4k" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566831 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/56d54501-3819-4db7-876f-d530dca3082c-profile-collector-cert\") pod \"catalog-operator-68c6474976-qqkkd\" (UID: \"56d54501-3819-4db7-876f-d530dca3082c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566863 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-bound-sa-token\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566900 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-config\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.566918 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14aa4fde-5a0f-41ce-a61e-902c0597d698-config-volume\") pod \"collect-profiles-29524320-q6rwr\" (UID: \"14aa4fde-5a0f-41ce-a61e-902c0597d698\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.570566 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43b75e6e-5b2d-4690-b500-20ad18a1e042-registry-certificates\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.570945 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-etcd-ca\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.571198 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-etcd-service-ca\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.572117 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e19e2fd2-3946-48d7-8954-121e91331caa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fttgh\" (UID: \"e19e2fd2-3946-48d7-8954-121e91331caa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: E0219 00:09:57.572696 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 00:09:58.072614613 +0000 UTC m=+143.763580660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.575798 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p"] Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.576575 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-etcd-client\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.577430 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/059b4f19-56a5-4dad-a957-5cc444811056-config\") pod \"kube-apiserver-operator-766d6c64bb-754pg\" (UID: \"059b4f19-56a5-4dad-a957-5cc444811056\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.577757 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-config\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.578095 
4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43b75e6e-5b2d-4690-b500-20ad18a1e042-trusted-ca\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.577123 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43b75e6e-5b2d-4690-b500-20ad18a1e042-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.579461 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-serving-cert\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.579644 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-registry-tls\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.579665 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/059b4f19-56a5-4dad-a957-5cc444811056-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-754pg\" (UID: \"059b4f19-56a5-4dad-a957-5cc444811056\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 
00:09:57.580868 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e19e2fd2-3946-48d7-8954-121e91331caa-images\") pod \"machine-config-operator-74547568cd-fttgh\" (UID: \"e19e2fd2-3946-48d7-8954-121e91331caa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.582158 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e19e2fd2-3946-48d7-8954-121e91331caa-proxy-tls\") pod \"machine-config-operator-74547568cd-fttgh\" (UID: \"e19e2fd2-3946-48d7-8954-121e91331caa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.594157 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.594197 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43b75e6e-5b2d-4690-b500-20ad18a1e042-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.609295 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj8bv\" (UniqueName: \"kubernetes.io/projected/c092d2fc-b49d-4e98-a5ca-4bbd50f96148-kube-api-access-jj8bv\") pod \"etcd-operator-b45778765-w8x8x\" (UID: \"c092d2fc-b49d-4e98-a5ca-4bbd50f96148\") " pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.639534 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxgkx\" 
(UniqueName: \"kubernetes.io/projected/e19e2fd2-3946-48d7-8954-121e91331caa-kube-api-access-rxgkx\") pod \"machine-config-operator-74547568cd-fttgh\" (UID: \"e19e2fd2-3946-48d7-8954-121e91331caa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.671855 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672105 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/113bf002-41c6-4821-83eb-db9515e73174-metrics-tls\") pod \"dns-default-xbfwq\" (UID: \"113bf002-41c6-4821-83eb-db9515e73174\") " pod="openshift-dns/dns-default-xbfwq" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672149 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4z4f\" (UniqueName: \"kubernetes.io/projected/1d27b12a-0f15-445d-8f5e-40f234ed6f0d-kube-api-access-g4z4f\") pod \"machine-config-server-tlj4k\" (UID: \"1d27b12a-0f15-445d-8f5e-40f234ed6f0d\") " pod="openshift-machine-config-operator/machine-config-server-tlj4k" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672192 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0f27ebb8-bc9a-433f-8e93-c521951d22ee-cert\") pod \"ingress-canary-55sxl\" (UID: \"0f27ebb8-bc9a-433f-8e93-c521951d22ee\") " pod="openshift-ingress-canary/ingress-canary-55sxl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672218 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" 
(UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-plugins-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672274 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-mountpoint-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672299 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1d27b12a-0f15-445d-8f5e-40f234ed6f0d-node-bootstrap-token\") pod \"machine-config-server-tlj4k\" (UID: \"1d27b12a-0f15-445d-8f5e-40f234ed6f0d\") " pod="openshift-machine-config-operator/machine-config-server-tlj4k" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672340 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-registration-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672365 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/56d54501-3819-4db7-876f-d530dca3082c-srv-cert\") pod \"catalog-operator-68c6474976-qqkkd\" (UID: \"56d54501-3819-4db7-876f-d530dca3082c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672418 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3575098a-b4e1-4b3e-b0da-b9f9867ce800-config\") pod \"service-ca-operator-777779d784-jzfx7\" (UID: \"3575098a-b4e1-4b3e-b0da-b9f9867ce800\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672480 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1d27b12a-0f15-445d-8f5e-40f234ed6f0d-certs\") pod \"machine-config-server-tlj4k\" (UID: \"1d27b12a-0f15-445d-8f5e-40f234ed6f0d\") " pod="openshift-machine-config-operator/machine-config-server-tlj4k" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672520 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n25wh\" (UniqueName: \"kubernetes.io/projected/0f27ebb8-bc9a-433f-8e93-c521951d22ee-kube-api-access-n25wh\") pod \"ingress-canary-55sxl\" (UID: \"0f27ebb8-bc9a-433f-8e93-c521951d22ee\") " pod="openshift-ingress-canary/ingress-canary-55sxl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672659 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/56d54501-3819-4db7-876f-d530dca3082c-profile-collector-cert\") pod \"catalog-operator-68c6474976-qqkkd\" (UID: \"56d54501-3819-4db7-876f-d530dca3082c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672703 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14aa4fde-5a0f-41ce-a61e-902c0597d698-config-volume\") pod \"collect-profiles-29524320-q6rwr\" (UID: \"14aa4fde-5a0f-41ce-a61e-902c0597d698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672737 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14aa4fde-5a0f-41ce-a61e-902c0597d698-secret-volume\") pod \"collect-profiles-29524320-q6rwr\" (UID: \"14aa4fde-5a0f-41ce-a61e-902c0597d698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672762 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59q8n\" (UniqueName: \"kubernetes.io/projected/3575098a-b4e1-4b3e-b0da-b9f9867ce800-kube-api-access-59q8n\") pod \"service-ca-operator-777779d784-jzfx7\" (UID: \"3575098a-b4e1-4b3e-b0da-b9f9867ce800\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672811 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vt9c\" (UniqueName: \"kubernetes.io/projected/14aa4fde-5a0f-41ce-a61e-902c0597d698-kube-api-access-8vt9c\") pod \"collect-profiles-29524320-q6rwr\" (UID: \"14aa4fde-5a0f-41ce-a61e-902c0597d698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672838 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-socket-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672865 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gvcb\" (UniqueName: \"kubernetes.io/projected/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-kube-api-access-7gvcb\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 
00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672893 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zcq8\" (UniqueName: \"kubernetes.io/projected/113bf002-41c6-4821-83eb-db9515e73174-kube-api-access-6zcq8\") pod \"dns-default-xbfwq\" (UID: \"113bf002-41c6-4821-83eb-db9515e73174\") " pod="openshift-dns/dns-default-xbfwq" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672928 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-csi-data-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672957 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/113bf002-41c6-4821-83eb-db9515e73174-config-volume\") pod \"dns-default-xbfwq\" (UID: \"113bf002-41c6-4821-83eb-db9515e73174\") " pod="openshift-dns/dns-default-xbfwq" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.672987 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zck8f\" (UniqueName: \"kubernetes.io/projected/56d54501-3819-4db7-876f-d530dca3082c-kube-api-access-zck8f\") pod \"catalog-operator-68c6474976-qqkkd\" (UID: \"56d54501-3819-4db7-876f-d530dca3082c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.673015 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3575098a-b4e1-4b3e-b0da-b9f9867ce800-serving-cert\") pod \"service-ca-operator-777779d784-jzfx7\" (UID: \"3575098a-b4e1-4b3e-b0da-b9f9867ce800\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" Feb 19 
00:09:57 crc kubenswrapper[4825]: E0219 00:09:57.674306 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:58.174272189 +0000 UTC m=+143.865238236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.674933 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-socket-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.677962 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-csi-data-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.678312 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-registration-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 
crc kubenswrapper[4825]: I0219 00:09:57.678371 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-plugins-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.678407 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-mountpoint-dir\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.680257 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14aa4fde-5a0f-41ce-a61e-902c0597d698-config-volume\") pod \"collect-profiles-29524320-q6rwr\" (UID: \"14aa4fde-5a0f-41ce-a61e-902c0597d698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.683812 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/56d54501-3819-4db7-876f-d530dca3082c-profile-collector-cert\") pod \"catalog-operator-68c6474976-qqkkd\" (UID: \"56d54501-3819-4db7-876f-d530dca3082c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.684121 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/113bf002-41c6-4821-83eb-db9515e73174-config-volume\") pod \"dns-default-xbfwq\" (UID: \"113bf002-41c6-4821-83eb-db9515e73174\") " pod="openshift-dns/dns-default-xbfwq" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 
00:09:57.684191 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/113bf002-41c6-4821-83eb-db9515e73174-metrics-tls\") pod \"dns-default-xbfwq\" (UID: \"113bf002-41c6-4821-83eb-db9515e73174\") " pod="openshift-dns/dns-default-xbfwq" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.684269 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3575098a-b4e1-4b3e-b0da-b9f9867ce800-config\") pod \"service-ca-operator-777779d784-jzfx7\" (UID: \"3575098a-b4e1-4b3e-b0da-b9f9867ce800\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.684327 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0f27ebb8-bc9a-433f-8e93-c521951d22ee-cert\") pod \"ingress-canary-55sxl\" (UID: \"0f27ebb8-bc9a-433f-8e93-c521951d22ee\") " pod="openshift-ingress-canary/ingress-canary-55sxl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.685883 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/56d54501-3819-4db7-876f-d530dca3082c-srv-cert\") pod \"catalog-operator-68c6474976-qqkkd\" (UID: \"56d54501-3819-4db7-876f-d530dca3082c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.686708 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfsbk\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-kube-api-access-mfsbk\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.687789 4825 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1d27b12a-0f15-445d-8f5e-40f234ed6f0d-node-bootstrap-token\") pod \"machine-config-server-tlj4k\" (UID: \"1d27b12a-0f15-445d-8f5e-40f234ed6f0d\") " pod="openshift-machine-config-operator/machine-config-server-tlj4k" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.687881 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dmlhp"] Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.688973 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1d27b12a-0f15-445d-8f5e-40f234ed6f0d-certs\") pod \"machine-config-server-tlj4k\" (UID: \"1d27b12a-0f15-445d-8f5e-40f234ed6f0d\") " pod="openshift-machine-config-operator/machine-config-server-tlj4k" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.690021 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3575098a-b4e1-4b3e-b0da-b9f9867ce800-serving-cert\") pod \"service-ca-operator-777779d784-jzfx7\" (UID: \"3575098a-b4e1-4b3e-b0da-b9f9867ce800\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.690102 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/059b4f19-56a5-4dad-a957-5cc444811056-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-754pg\" (UID: \"059b4f19-56a5-4dad-a957-5cc444811056\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.703501 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14aa4fde-5a0f-41ce-a61e-902c0597d698-secret-volume\") pod \"collect-profiles-29524320-q6rwr\" (UID: 
\"14aa4fde-5a0f-41ce-a61e-902c0597d698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.709586 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-bound-sa-token\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.737967 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n25wh\" (UniqueName: \"kubernetes.io/projected/0f27ebb8-bc9a-433f-8e93-c521951d22ee-kube-api-access-n25wh\") pod \"ingress-canary-55sxl\" (UID: \"0f27ebb8-bc9a-433f-8e93-c521951d22ee\") " pod="openshift-ingress-canary/ingress-canary-55sxl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.757338 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4z4f\" (UniqueName: \"kubernetes.io/projected/1d27b12a-0f15-445d-8f5e-40f234ed6f0d-kube-api-access-g4z4f\") pod \"machine-config-server-tlj4k\" (UID: \"1d27b12a-0f15-445d-8f5e-40f234ed6f0d\") " pod="openshift-machine-config-operator/machine-config-server-tlj4k" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.771556 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz"] Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.774267 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 
19 00:09:57 crc kubenswrapper[4825]: E0219 00:09:57.774635 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:09:58.274622043 +0000 UTC m=+143.965588080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.781068 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gvcb\" (UniqueName: \"kubernetes.io/projected/0556f35e-37e8-4ae8-9bc4-32394e9f86ab-kube-api-access-7gvcb\") pod \"csi-hostpathplugin-f8slh\" (UID: \"0556f35e-37e8-4ae8-9bc4-32394e9f86ab\") " pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.797121 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zcq8\" (UniqueName: \"kubernetes.io/projected/113bf002-41c6-4821-83eb-db9515e73174-kube-api-access-6zcq8\") pod \"dns-default-xbfwq\" (UID: \"113bf002-41c6-4821-83eb-db9515e73174\") " pod="openshift-dns/dns-default-xbfwq" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.803756 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zbhrb"] Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.815096 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59q8n\" (UniqueName: 
\"kubernetes.io/projected/3575098a-b4e1-4b3e-b0da-b9f9867ce800-kube-api-access-59q8n\") pod \"service-ca-operator-777779d784-jzfx7\" (UID: \"3575098a-b4e1-4b3e-b0da-b9f9867ce800\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.832664 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq"] Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.835409 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vt9c\" (UniqueName: \"kubernetes.io/projected/14aa4fde-5a0f-41ce-a61e-902c0597d698-kube-api-access-8vt9c\") pod \"collect-profiles-29524320-q6rwr\" (UID: \"14aa4fde-5a0f-41ce-a61e-902c0597d698\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.837636 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.855693 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zck8f\" (UniqueName: \"kubernetes.io/projected/56d54501-3819-4db7-876f-d530dca3082c-kube-api-access-zck8f\") pod \"catalog-operator-68c6474976-qqkkd\" (UID: \"56d54501-3819-4db7-876f-d530dca3082c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.860932 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.863159 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.870637 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.876136 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:09:57 crc kubenswrapper[4825]: E0219 00:09:57.876664 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:58.376646022 +0000 UTC m=+144.067612069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.876796 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.893336 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-f8slh" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.903847 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.911943 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-55sxl" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.919996 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tlj4k" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.926759 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xbfwq" Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.977590 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:57 crc kubenswrapper[4825]: E0219 00:09:57.978347 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:09:58.47833469 +0000 UTC m=+144.169300737 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:57 crc kubenswrapper[4825]: I0219 00:09:57.993088 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" event={"ID":"bf1c5562-2a73-410f-b7b3-fe0edab3216b","Type":"ContainerStarted","Data":"3db94d891fde5923f82a9aeba4ee65e946e659d86d9ea5698078f65fa9bb5bc1"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.006101 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" event={"ID":"3da90cf7-acf9-4fa5-8a59-1f444dd5a619","Type":"ContainerStarted","Data":"6cda49c3673bf0d94268213bad5fc79e30aee8ef8e373ce2e8aca076df0f7f9e"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.033928 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq" event={"ID":"78b30adc-193b-4784-aa76-522479b866dc","Type":"ContainerStarted","Data":"0d5bd248e959d421a209b17be34334718a44abc5e1ae71fcc60259ed18127746"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.076977 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-lhzt4" event={"ID":"bda9acd8-7428-4ed2-aa1c-54c759b39e97","Type":"ContainerStarted","Data":"26295ffedf9a573135541f01a83b1036a5b7890c59712b5f043b469038f857d0"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.077033 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-lhzt4" event={"ID":"bda9acd8-7428-4ed2-aa1c-54c759b39e97","Type":"ContainerStarted","Data":"3250be618cf7f825e2d7b1145c8aa2f2d3ee82fda6a24883adb4f90530ec9ce9"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.079521 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:09:58 crc kubenswrapper[4825]: E0219 00:09:58.080069 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:58.580053818 +0000 UTC m=+144.271019865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.090097 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" event={"ID":"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248","Type":"ContainerStarted","Data":"c2df3820a10bd285779481bb2c3ceec674cf9fbb72096ff2fc635484d361661e"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.097209 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bf56f" 
event={"ID":"29be03fe-da22-41a7-9243-67aa815fbfb1","Type":"ContainerStarted","Data":"0261c888f549af7767ac3ac7f5fdc1b48ae7efd01bbe1824d2e5962a970915fe"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.097754 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bf56f" Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.099475 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.099559 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.106755 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" event={"ID":"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5","Type":"ContainerStarted","Data":"69ccfcf3c55a7662d3270a5f175109d921197e719a098ebfe7336289f77ab422"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.109579 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" event={"ID":"999a793b-4d2e-41bb-bd09-cd8ca31cef0c","Type":"ContainerStarted","Data":"a9bc7c63cdbde1c1abac0b2efbc28f051a0fd45b0ff403db81bbffe91bf39f5a"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.112056 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-czdg9" 
event={"ID":"86321d30-1e05-467f-bdac-35cefcfdd789","Type":"ContainerStarted","Data":"49cf8bb0ec80cad99f74b5037c4858c7f8c6f4e8c570fa6ef9b4c45890337f50"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.112359 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.113584 4825 patch_prober.go:28] interesting pod/console-operator-58897d9998-czdg9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.113627 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-czdg9" podUID="86321d30-1e05-467f-bdac-35cefcfdd789" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.114444 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9" event={"ID":"12aeb4bf-6d3b-4c0e-a121-20c286762e7b","Type":"ContainerStarted","Data":"9fabc0722aac5a0f6b4b93a678b6f6c83f973ed7f3d06160f9dfb8bbe2c562b4"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.116122 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" event={"ID":"690f4dfb-2e2f-4419-a331-3a26e0dac535","Type":"ContainerStarted","Data":"515a6bcdaa00829f0a039921e2da55b88ea4346d262803047c0a1544ea982191"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.129139 4825 generic.go:334] "Generic (PLEG): container finished" podID="b7d82a4a-3947-4645-982a-654a8101ba55" 
containerID="c6902cd40ca45406c3314e5f6404a82a0692f03b546d80e8a2589dfb12c77d4a" exitCode=0 Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.129234 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rm422" event={"ID":"b7d82a4a-3947-4645-982a-654a8101ba55","Type":"ContainerDied","Data":"c6902cd40ca45406c3314e5f6404a82a0692f03b546d80e8a2589dfb12c77d4a"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.133322 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" event={"ID":"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c","Type":"ContainerStarted","Data":"9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.134854 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbhrb" event={"ID":"41a74d6d-5d02-4231-abc9-3adda5eb4a1e","Type":"ContainerStarted","Data":"98d91b70ed02c4a0d74dc6ec2629342dfba24045b81d483fd6de571dbafb6db7"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.136140 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dmlhp" event={"ID":"ac81eab9-9f57-48f0-a71d-0bc6087eb8d8","Type":"ContainerStarted","Data":"2668292eea4dc83571b42e93d2a111b2d49fd56477f0cfa4412c111153db9c1c"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.139061 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d" event={"ID":"864d9174-e828-4f4e-a143-bc3491f42aef","Type":"ContainerStarted","Data":"5df651bdec781ecd8f5277277eaccd472634339daef7527254b30dcf78b5338f"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.140901 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz" 
event={"ID":"c142cb69-debd-49df-9c48-50cf5e5aa740","Type":"ContainerStarted","Data":"5fad3b85efd1f01387790a6966b50e696d51cbbed3862f3ea7c05349d0d1aea2"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.143367 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" event={"ID":"fff3e916-b71f-44c3-a4ac-a78efc547a28","Type":"ContainerStarted","Data":"c1ef257889273d81d46af18b3db925a09e8995065bd63fbcaebf4958f15d9e9a"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.146052 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" event={"ID":"2ee445c4-ffb8-44f0-8cff-19563c07b525","Type":"ContainerStarted","Data":"e813bf108dc687d314bc148a6c044e5b7cb3ba73ea527ec8a598cff71cbaf358"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.223728 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:58 crc kubenswrapper[4825]: E0219 00:09:58.224265 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:09:58.724243462 +0000 UTC m=+144.415209519 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.227288 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-92549" event={"ID":"1f6724cf-dc1e-44cc-8f59-91d3e8b00970","Type":"ContainerStarted","Data":"7f58400d14645c4e739dbe92941370365556986106e7e72a936013584bf28fe6"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.230857 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh" event={"ID":"3c6e2361-7fb0-4e89-8747-ae1a46cb0e65","Type":"ContainerStarted","Data":"f0b7f16b9ecbdb77a27b642c09dc9866004c7b920deae4e55b5351b52f0de8c9"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.290378 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-khn5f" event={"ID":"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc","Type":"ContainerStarted","Data":"42700d989a15b37e2c649b3aaa6889f8783a5666cfea1cfa99bfb58a6cdf6cd5"} Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.327905 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:09:58 crc kubenswrapper[4825]: E0219 00:09:58.329125 4825 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:58.829106843 +0000 UTC m=+144.520072890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.336810 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-f9db5"] Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.346535 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn"] Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.430899 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:58 crc kubenswrapper[4825]: E0219 00:09:58.431249 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:09:58.931235785 +0000 UTC m=+144.622201832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:58 crc kubenswrapper[4825]: W0219 00:09:58.452005 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d27b12a_0f15_445d_8f5e_40f234ed6f0d.slice/crio-c42b0ab0b32974deea8cb0121a72e7165e8a9ea7b5005ff434db6a4206ecab79 WatchSource:0}: Error finding container c42b0ab0b32974deea8cb0121a72e7165e8a9ea7b5005ff434db6a4206ecab79: Status 404 returned error can't find the container with id c42b0ab0b32974deea8cb0121a72e7165e8a9ea7b5005ff434db6a4206ecab79 Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.535234 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:09:58 crc kubenswrapper[4825]: E0219 00:09:58.535756 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:59.035739895 +0000 UTC m=+144.726705942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.612578 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w"] Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.615072 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wfn7l"] Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.637409 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:58 crc kubenswrapper[4825]: E0219 00:09:58.638459 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:09:59.138438925 +0000 UTC m=+144.829404972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.734598 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bf56f" podStartSLOduration=122.734576381 podStartE2EDuration="2m2.734576381s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:58.716870794 +0000 UTC m=+144.407836841" watchObservedRunningTime="2026-02-19 00:09:58.734576381 +0000 UTC m=+144.425542428" Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.738309 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:09:58 crc kubenswrapper[4825]: E0219 00:09:58.738735 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:59.238713426 +0000 UTC m=+144.929679473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.823009 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.823062 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.853022 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:58 crc kubenswrapper[4825]: E0219 00:09:58.853970 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 00:09:59.353953926 +0000 UTC m=+145.044919973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:58 crc kubenswrapper[4825]: I0219 00:09:58.954379 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:09:58 crc kubenswrapper[4825]: E0219 00:09:58.958655 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:59.458631871 +0000 UTC m=+145.149597918 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.075782 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:59 crc kubenswrapper[4825]: E0219 00:09:59.076366 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:09:59.576348461 +0000 UTC m=+145.267314508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.086906 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh"] Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.087920 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dc72t"] Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.097177 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp"] Feb 19 00:09:59 crc kubenswrapper[4825]: W0219 00:09:59.101499 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode19e2fd2_3946_48d7_8954_121e91331caa.slice/crio-8282b5594e10971d760fd579fced25a57886c04280470ee66672e0025c0a2bb8 WatchSource:0}: Error finding container 8282b5594e10971d760fd579fced25a57886c04280470ee66672e0025c0a2bb8: Status 404 returned error can't find the container with id 8282b5594e10971d760fd579fced25a57886c04280470ee66672e0025c0a2bb8 Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.102943 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7"] Feb 19 00:09:59 crc kubenswrapper[4825]: W0219 00:09:59.107865 4825 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8334d428_f7e6_41d5_806b_b5471a354fe8.slice/crio-6a008543f4a0d59f72a487d27348d9df2fbb865fa3aeb524cbcca224ea226b85 WatchSource:0}: Error finding container 6a008543f4a0d59f72a487d27348d9df2fbb865fa3aeb524cbcca224ea226b85: Status 404 returned error can't find the container with id 6a008543f4a0d59f72a487d27348d9df2fbb865fa3aeb524cbcca224ea226b85 Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.109018 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr"] Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.114865 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-55sxl"] Feb 19 00:09:59 crc kubenswrapper[4825]: W0219 00:09:59.119247 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb925a9fb_8e1f_4fbd_93c7_2f65dc3dc37b.slice/crio-b5b0d16b2db9b8839bf7ef2653aa90c2d903f4392b45d8b9c38f0b9c7ff48d93 WatchSource:0}: Error finding container b5b0d16b2db9b8839bf7ef2653aa90c2d903f4392b45d8b9c38f0b9c7ff48d93: Status 404 returned error can't find the container with id b5b0d16b2db9b8839bf7ef2653aa90c2d903f4392b45d8b9c38f0b9c7ff48d93 Feb 19 00:09:59 crc kubenswrapper[4825]: W0219 00:09:59.120694 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76110eca_3d21_477d_9656_965eaa768c21.slice/crio-410e6a78a1ce59fb867de82fc2e3a58502978888ef740fced45173bb0b62e706 WatchSource:0}: Error finding container 410e6a78a1ce59fb867de82fc2e3a58502978888ef740fced45173bb0b62e706: Status 404 returned error can't find the container with id 410e6a78a1ce59fb867de82fc2e3a58502978888ef740fced45173bb0b62e706 Feb 19 00:09:59 crc kubenswrapper[4825]: W0219 00:09:59.125387 4825 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f27ebb8_bc9a_433f_8e93_c521951d22ee.slice/crio-701a05e43cd6f014aff1608077eb2bf96834ed93086e1ee211155232215ecefe WatchSource:0}: Error finding container 701a05e43cd6f014aff1608077eb2bf96834ed93086e1ee211155232215ecefe: Status 404 returned error can't find the container with id 701a05e43cd6f014aff1608077eb2bf96834ed93086e1ee211155232215ecefe Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.133452 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-f8slh"] Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.135429 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-czdg9" podStartSLOduration=123.135405808 podStartE2EDuration="2m3.135405808s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:59.132055699 +0000 UTC m=+144.823021766" watchObservedRunningTime="2026-02-19 00:09:59.135405808 +0000 UTC m=+144.826371855" Feb 19 00:09:59 crc kubenswrapper[4825]: W0219 00:09:59.157568 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0556f35e_37e8_4ae8_9bc4_32394e9f86ab.slice/crio-80d3135e9b960bb2ac1dd6b5f28ee2a6e58790d4b99bf5364c119d012909464e WatchSource:0}: Error finding container 80d3135e9b960bb2ac1dd6b5f28ee2a6e58790d4b99bf5364c119d012909464e: Status 404 returned error can't find the container with id 80d3135e9b960bb2ac1dd6b5f28ee2a6e58790d4b99bf5364c119d012909464e Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.159803 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-w8x8x"] Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.177763 4825 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:09:59 crc kubenswrapper[4825]: E0219 00:09:59.178009 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:59.677983777 +0000 UTC m=+145.368949824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.178164 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:09:59 crc kubenswrapper[4825]: E0219 00:09:59.178514 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:09:59.678491953 +0000 UTC m=+145.369458000 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.224750 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd"] Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.240544 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xbfwq"] Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.245249 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr"] Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.248200 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg"] Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.250036 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7"] Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.279110 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:09:59 crc kubenswrapper[4825]: E0219 00:09:59.279260 4825 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:59.77923031 +0000 UTC m=+145.470196357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.279591 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:09:59 crc kubenswrapper[4825]: E0219 00:09:59.279956 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:09:59.779948894 +0000 UTC m=+145.470914941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.303513 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tlj4k" event={"ID":"1d27b12a-0f15-445d-8f5e-40f234ed6f0d","Type":"ContainerStarted","Data":"dd73fb9ba11939c0caf3ac055aedc62edf351fa7dbcce743d30a62abfc1f0e92"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.303577 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tlj4k" event={"ID":"1d27b12a-0f15-445d-8f5e-40f234ed6f0d","Type":"ContainerStarted","Data":"c42b0ab0b32974deea8cb0121a72e7165e8a9ea7b5005ff434db6a4206ecab79"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.306119 4825 generic.go:334] "Generic (PLEG): container finished" podID="fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5" containerID="e3a9c1da6d71f7f1467a8270b323c374ae988e6d67b9c6bd90f513605a8805dd" exitCode=0
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.306294 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" event={"ID":"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5","Type":"ContainerDied","Data":"e3a9c1da6d71f7f1467a8270b323c374ae988e6d67b9c6bd90f513605a8805dd"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.310579 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp" event={"ID":"8334d428-f7e6-41d5-806b-b5471a354fe8","Type":"ContainerStarted","Data":"6a008543f4a0d59f72a487d27348d9df2fbb865fa3aeb524cbcca224ea226b85"}
Feb 19 00:09:59 crc kubenswrapper[4825]: W0219 00:09:59.314886 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod059b4f19_56a5_4dad_a957_5cc444811056.slice/crio-9d155804605d45fc5d59248451f4d3fab15b0d8a23ac6b6c70f2fa19197aa731 WatchSource:0}: Error finding container 9d155804605d45fc5d59248451f4d3fab15b0d8a23ac6b6c70f2fa19197aa731: Status 404 returned error can't find the container with id 9d155804605d45fc5d59248451f4d3fab15b0d8a23ac6b6c70f2fa19197aa731
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.316167 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w" event={"ID":"ea799468-c418-4b46-b279-cb78c37b2ce3","Type":"ContainerStarted","Data":"f0f21e16152c7acbe952e152eaef8858853c9267066e8e08e6d6a18a1c15f1f6"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.317776 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" event={"ID":"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b","Type":"ContainerStarted","Data":"b5b0d16b2db9b8839bf7ef2653aa90c2d903f4392b45d8b9c38f0b9c7ff48d93"}
Feb 19 00:09:59 crc kubenswrapper[4825]: W0219 00:09:59.318191 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3575098a_b4e1_4b3e_b0da_b9f9867ce800.slice/crio-3c48334675389db91e58808656f3e022012825d49b19d0565528ddf9e19adab7 WatchSource:0}: Error finding container 3c48334675389db91e58808656f3e022012825d49b19d0565528ddf9e19adab7: Status 404 returned error can't find the container with id 3c48334675389db91e58808656f3e022012825d49b19d0565528ddf9e19adab7
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.319629 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh" event={"ID":"3c6e2361-7fb0-4e89-8747-ae1a46cb0e65","Type":"ContainerStarted","Data":"c68fff62115f02fd14275d85b293044d081d41150f37e0eb1acfb25ca3005f31"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.329302 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d" event={"ID":"864d9174-e828-4f4e-a143-bc3491f42aef","Type":"ContainerStarted","Data":"e44c294cb3e783eb76a4f93d435b01a55e169d469d466af3ce0ef7b5b3289ede"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.333734 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-55sxl" event={"ID":"0f27ebb8-bc9a-433f-8e93-c521951d22ee","Type":"ContainerStarted","Data":"701a05e43cd6f014aff1608077eb2bf96834ed93086e1ee211155232215ecefe"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.342172 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f9db5" event={"ID":"ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0","Type":"ContainerStarted","Data":"aba46660200e46d93460ca1e50f26ffbc7558178855a23e0fb9de8d43b674f34"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.345495 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l" event={"ID":"c7867337-27db-4c61-87e3-fd2f957a0381","Type":"ContainerStarted","Data":"9b222f8cb1491888e51a5b982728ebc86b8ca50203d39565fc387900eba6cf88"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.348526 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" event={"ID":"76110eca-3d21-477d-9656-965eaa768c21","Type":"ContainerStarted","Data":"410e6a78a1ce59fb867de82fc2e3a58502978888ef740fced45173bb0b62e706"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.351632 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" event={"ID":"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248","Type":"ContainerStarted","Data":"b6a53fe5c539362bf366c47477f6021e8187949aa8fefa4b397b8f0382e478fb"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.354894 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" event={"ID":"c092d2fc-b49d-4e98-a5ca-4bbd50f96148","Type":"ContainerStarted","Data":"98ee04c4b3c13c64b42d6175fd1e44a5150682efee4a978a92f422288f9ccb05"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.358897 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" event={"ID":"400b2fc6-03e6-48b5-9424-67ac6c34cfb1","Type":"ContainerStarted","Data":"69411840d561e7831613bad4aa45f517634c9ceffaf8a03eff6ae22b0df03b80"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.361333 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" event={"ID":"2ee445c4-ffb8-44f0-8cff-19563c07b525","Type":"ContainerStarted","Data":"ced59213cc476fbc2a9ce2d1e02d6482d10e54f8fd51c99af4156359dc0f8df9"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.364304 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9" event={"ID":"12aeb4bf-6d3b-4c0e-a121-20c286762e7b","Type":"ContainerStarted","Data":"4a40ca7d8f0605208462b0b668556060d4a96cf2cdcb63d6ef0c98b654a6a56a"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.368840 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbhrb" event={"ID":"41a74d6d-5d02-4231-abc9-3adda5eb4a1e","Type":"ContainerStarted","Data":"d44baeb846a57947f9ef6e9514e6582891cde34d90e528b34bb93a9c881f65b2"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.370308 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" event={"ID":"2e7b2089-a948-4a74-8bab-7eae32913dbd","Type":"ContainerStarted","Data":"a6a722c223ddbf7c365c089eeb0f9b806e4b48d4c0726d71719c7cc58fe1d320"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.371521 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq" event={"ID":"78b30adc-193b-4784-aa76-522479b866dc","Type":"ContainerStarted","Data":"541377dd803a18340e586ed981876a85600e72ddac1bbfa4aa415dfd161f1d63"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.372976 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" event={"ID":"e19e2fd2-3946-48d7-8954-121e91331caa","Type":"ContainerStarted","Data":"8282b5594e10971d760fd579fced25a57886c04280470ee66672e0025c0a2bb8"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.373984 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f8slh" event={"ID":"0556f35e-37e8-4ae8-9bc4-32394e9f86ab","Type":"ContainerStarted","Data":"80d3135e9b960bb2ac1dd6b5f28ee2a6e58790d4b99bf5364c119d012909464e"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.374892 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" event={"ID":"fcce0511-0fb0-48e9-927c-c259595a806b","Type":"ContainerStarted","Data":"12abd65ad5d78193b1fa4ad1edf5f061d33512e49abbc262786d9172f26daf0e"}
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.375591 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.375637 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.376207 4825 patch_prober.go:28] interesting pod/console-operator-58897d9998-czdg9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.376270 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-czdg9" podUID="86321d30-1e05-467f-bdac-35cefcfdd789" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.380197 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:09:59 crc kubenswrapper[4825]: E0219 00:09:59.380615 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:09:59.880597518 +0000 UTC m=+145.571563565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:09:59 crc kubenswrapper[4825]: W0219 00:09:59.412104 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod113bf002_41c6_4821_83eb_db9515e73174.slice/crio-95640d5e16464c0ba4ddb5f339f3b79c64c4eadc7279d1e78a04fecd0eefb053 WatchSource:0}: Error finding container 95640d5e16464c0ba4ddb5f339f3b79c64c4eadc7279d1e78a04fecd0eefb053: Status 404 returned error can't find the container with id 95640d5e16464c0ba4ddb5f339f3b79c64c4eadc7279d1e78a04fecd0eefb053
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.455480 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" podStartSLOduration=123.455460419 podStartE2EDuration="2m3.455460419s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:59.451972246 +0000 UTC m=+145.142938293" watchObservedRunningTime="2026-02-19 00:09:59.455460419 +0000 UTC m=+145.146426466"
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.482408 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:09:59 crc kubenswrapper[4825]: E0219 00:09:59.484022 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:09:59.98399725 +0000 UTC m=+145.674963477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.499274 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29524320-khn5f" podStartSLOduration=123.499257839 podStartE2EDuration="2m3.499257839s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:59.49472081 +0000 UTC m=+145.185686857" watchObservedRunningTime="2026-02-19 00:09:59.499257839 +0000 UTC m=+145.190223886"
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.540770 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ctmf9" podStartSLOduration=123.540746612 podStartE2EDuration="2m3.540746612s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:59.539970786 +0000 UTC m=+145.230936853" watchObservedRunningTime="2026-02-19 00:09:59.540746612 +0000 UTC m=+145.231712659"
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.599644 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:09:59 crc kubenswrapper[4825]: E0219 00:09:59.600284 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:00.100264934 +0000 UTC m=+145.791230981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.637002 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r9ggh" podStartSLOduration=123.636977541 podStartE2EDuration="2m3.636977541s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:59.577826732 +0000 UTC m=+145.268792799" watchObservedRunningTime="2026-02-19 00:09:59.636977541 +0000 UTC m=+145.327943588"
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.654074 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-r7p5c" podStartSLOduration=123.654056378 podStartE2EDuration="2m3.654056378s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:59.653321345 +0000 UTC m=+145.344287402" watchObservedRunningTime="2026-02-19 00:09:59.654056378 +0000 UTC m=+145.345022425"
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.705931 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:09:59 crc kubenswrapper[4825]: E0219 00:09:59.706791 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:00.206779509 +0000 UTC m=+145.897745556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.759034 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" podStartSLOduration=123.759016933 podStartE2EDuration="2m3.759016933s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:59.732288051 +0000 UTC m=+145.423254098" watchObservedRunningTime="2026-02-19 00:09:59.759016933 +0000 UTC m=+145.449982980"
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.809972 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:09:59 crc kubenswrapper[4825]: E0219 00:09:59.810705 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:00.310689039 +0000 UTC m=+146.001655076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.848020 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-k9t86" podStartSLOduration=123.848001746 podStartE2EDuration="2m3.848001746s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:59.846762016 +0000 UTC m=+145.537728073" watchObservedRunningTime="2026-02-19 00:09:59.848001746 +0000 UTC m=+145.538967803"
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.911658 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:09:59 crc kubenswrapper[4825]: E0219 00:09:59.911988 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:00.411974993 +0000 UTC m=+146.102941030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:09:59 crc kubenswrapper[4825]: I0219 00:09:59.912888 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-pfhx8" podStartSLOduration=123.912869723 podStartE2EDuration="2m3.912869723s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:59.910316079 +0000 UTC m=+145.601282126" watchObservedRunningTime="2026-02-19 00:09:59.912869723 +0000 UTC m=+145.603835770"
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.012717 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.012943 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:00.512908606 +0000 UTC m=+146.203874673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.013579 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.014047 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:00.514029683 +0000 UTC m=+146.204995730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.062376 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-92549" podStartSLOduration=124.062354589 podStartE2EDuration="2m4.062354589s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.055096382 +0000 UTC m=+145.746062449" watchObservedRunningTime="2026-02-19 00:10:00.062354589 +0000 UTC m=+145.753320636"
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.117373 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.117855 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:00.617838969 +0000 UTC m=+146.308805016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.122645 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.127743 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.127798 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.171225 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-9lm2r" podStartSLOduration=124.17120046 podStartE2EDuration="2m4.17120046s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.170232958 +0000 UTC m=+145.861199005" watchObservedRunningTime="2026-02-19 00:10:00.17120046 +0000 UTC m=+145.862166507"
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.218542 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.219083 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:00.719071212 +0000 UTC m=+146.410037259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.252147 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" podStartSLOduration=123.252129491 podStartE2EDuration="2m3.252129491s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.251694716 +0000 UTC m=+145.942660783" watchObservedRunningTime="2026-02-19 00:10:00.252129491 +0000 UTC m=+145.943095538"
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.320229 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.320412 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:00.820378717 +0000 UTC m=+146.511344774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.320595 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.321290 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:00.821279926 +0000 UTC m=+146.512245973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.372206 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-lhzt4" podStartSLOduration=124.372186526 podStartE2EDuration="2m4.372186526s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.368817747 +0000 UTC m=+146.059783794" watchObservedRunningTime="2026-02-19 00:10:00.372186526 +0000 UTC m=+146.063152573"
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.381287 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" event={"ID":"c092d2fc-b49d-4e98-a5ca-4bbd50f96148","Type":"ContainerStarted","Data":"6b4286c2d66f0c488695a8c7694daa47f5ebd0c9530517a0fe1541d1bb858cc7"}
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.382690 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f9db5" event={"ID":"ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0","Type":"ContainerStarted","Data":"15bc9ed69866f8ba5657056ed36a907c43daae0fdaf21ef3a0365720d16afe5a"}
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.384249 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" event={"ID":"999a793b-4d2e-41bb-bd09-cd8ca31cef0c","Type":"ContainerStarted","Data":"f9c65087c5fea1bd5d128a716fa64cc61111ed106fb9a51fbbb537dd935b1561"}
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.385472 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" event={"ID":"059b4f19-56a5-4dad-a957-5cc444811056","Type":"ContainerStarted","Data":"e51453251799f6a309baa24926d48663d9311d5b508d4c359ed3d3f4599d2865"}
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.385495 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" event={"ID":"059b4f19-56a5-4dad-a957-5cc444811056","Type":"ContainerStarted","Data":"9d155804605d45fc5d59248451f4d3fab15b0d8a23ac6b6c70f2fa19197aa731"}
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.387240 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" event={"ID":"e19e2fd2-3946-48d7-8954-121e91331caa","Type":"ContainerStarted","Data":"12affa7a7be8ae2d04401dc862fb79a024abaccef35e2761a9baecf07daee4f6"}
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.388886 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" event={"ID":"2ee445c4-ffb8-44f0-8cff-19563c07b525","Type":"ContainerStarted","Data":"73bc4ae391eb874f59568a28ae9e31ad31dee7a67d16332449f7d7acabc501e2"}
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.390017 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l" event={"ID":"c7867337-27db-4c61-87e3-fd2f957a0381","Type":"ContainerStarted","Data":"55130b134d600f06a87fcd8b1b331e22a239f059cd5cccf1b14d6aa05ec18748"}
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.391741 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp" event={"ID":"8334d428-f7e6-41d5-806b-b5471a354fe8","Type":"ContainerStarted","Data":"0cf3655947124931e49472951ae815eff6e5acfae54d9c97bcc719d50f1c02ee"}
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.398547 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dmlhp" event={"ID":"ac81eab9-9f57-48f0-a71d-0bc6087eb8d8","Type":"ContainerStarted","Data":"5ca49c1dcd471ed224822d396a258d6e1582615de0f68c1a5a98c9c3d6690a60"}
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.403203 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" event={"ID":"76110eca-3d21-477d-9656-965eaa768c21","Type":"ContainerStarted","Data":"16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734"}
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.403764 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t"
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.405976 4825 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dc72t container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.406025 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" podUID="76110eca-3d21-477d-9656-965eaa768c21" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.407958 4825 generic.go:334] "Generic (PLEG): container finished"
podID="f5f23c6e-7172-4c6c-87f0-8fa5edfa8248" containerID="b6a53fe5c539362bf366c47477f6021e8187949aa8fefa4b397b8f0382e478fb" exitCode=0 Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.408023 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" event={"ID":"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248","Type":"ContainerDied","Data":"b6a53fe5c539362bf366c47477f6021e8187949aa8fefa4b397b8f0382e478fb"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.408068 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" event={"ID":"f5f23c6e-7172-4c6c-87f0-8fa5edfa8248","Type":"ContainerStarted","Data":"9ac6de926f209b7493634448498efc81c86220f241c3a3ae124770620680fedc"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.408301 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.411244 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-55sxl" event={"ID":"0f27ebb8-bc9a-433f-8e93-c521951d22ee","Type":"ContainerStarted","Data":"dbb807b8c459040030be70bfda06a1eac1632540254ef4dd1ae4d46b551b40eb"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.414268 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-9p52n" podStartSLOduration=123.414233979 podStartE2EDuration="2m3.414233979s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.411307073 +0000 UTC m=+146.102273120" watchObservedRunningTime="2026-02-19 00:10:00.414233979 +0000 UTC m=+146.105200026" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 
00:10:00.415391 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w" event={"ID":"ea799468-c418-4b46-b279-cb78c37b2ce3","Type":"ContainerStarted","Data":"9ea0234bb810a7a80217016a2afae25dfc539bf22b163912195c9ec21eb4e472"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.422638 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.423136 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:00.923114719 +0000 UTC m=+146.614080766 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.492731 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xbfwq" event={"ID":"113bf002-41c6-4821-83eb-db9515e73174","Type":"ContainerStarted","Data":"95640d5e16464c0ba4ddb5f339f3b79c64c4eadc7279d1e78a04fecd0eefb053"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.497769 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-wfn7l" podStartSLOduration=123.497749273 podStartE2EDuration="2m3.497749273s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.494960842 +0000 UTC m=+146.185926899" watchObservedRunningTime="2026-02-19 00:10:00.497749273 +0000 UTC m=+146.188715320" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.497861 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" event={"ID":"3575098a-b4e1-4b3e-b0da-b9f9867ce800","Type":"ContainerStarted","Data":"d141e452fa7c97a7cbc9c22f1f3bf037872a75f2d9a2f6e0696869e205ea1fb2"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.497896 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" 
event={"ID":"3575098a-b4e1-4b3e-b0da-b9f9867ce800","Type":"ContainerStarted","Data":"3c48334675389db91e58808656f3e022012825d49b19d0565528ddf9e19adab7"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.502981 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" event={"ID":"14aa4fde-5a0f-41ce-a61e-902c0597d698","Type":"ContainerStarted","Data":"0a514186a0294ad5d52c2b3e9593ef66b85be73ecd4bd684f68d7910302a6326"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.503021 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" event={"ID":"14aa4fde-5a0f-41ce-a61e-902c0597d698","Type":"ContainerStarted","Data":"2b79b03bf5693da7278e54a847e3fd48833ee65c43d772a2b3c18274b264c444"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.512014 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d" event={"ID":"864d9174-e828-4f4e-a143-bc3491f42aef","Type":"ContainerStarted","Data":"674c3b09fc13c1609de309e0c7a5b8ebcb57730eadbf0df008d6fc461267a07a"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.517830 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" event={"ID":"56d54501-3819-4db7-876f-d530dca3082c","Type":"ContainerStarted","Data":"f2bc3aa5f924eb114fb1c3550d7d72d1f2ac5d8fde948c99d525de2de9ad86e1"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.517910 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" event={"ID":"56d54501-3819-4db7-876f-d530dca3082c","Type":"ContainerStarted","Data":"e26a7e7010ac6f5b22eba6d83c6410e892944b9b48fd392f364eded93a039a6f"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.519055 4825 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.519753 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-55sxl" podStartSLOduration=6.51973029 podStartE2EDuration="6.51973029s" podCreationTimestamp="2026-02-19 00:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.514670975 +0000 UTC m=+146.205637032" watchObservedRunningTime="2026-02-19 00:10:00.51973029 +0000 UTC m=+146.210696337" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.520843 4825 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qqkkd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.520897 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" podUID="56d54501-3819-4db7-876f-d530dca3082c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.523497 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz" event={"ID":"c142cb69-debd-49df-9c48-50cf5e5aa740","Type":"ContainerStarted","Data":"f272a6433db5e10c4fc28b2b1f0e52ccd203681b4e4b36f5adb8c428b2c26905"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.523911 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.524771 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.024754514 +0000 UTC m=+146.715720601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.525680 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" event={"ID":"fcce0511-0fb0-48e9-927c-c259595a806b","Type":"ContainerStarted","Data":"1549566c94375de7a6721392d19926edc7dcfed9a50acf0718b28a3c1b8dc9fb"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.526848 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.528351 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" event={"ID":"b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b","Type":"ContainerStarted","Data":"82465a2ade8d19f2b4aa0cedfe56459c7fd0ec306bdc793621c16a55cab366f0"} Feb 19 00:10:00 crc 
kubenswrapper[4825]: I0219 00:10:00.528600 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.528920 4825 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hwqsn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.528958 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" podUID="fcce0511-0fb0-48e9-927c-c259595a806b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.532312 4825 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qh6r7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.532353 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" podUID="b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.534300 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbhrb" 
event={"ID":"41a74d6d-5d02-4231-abc9-3adda5eb4a1e","Type":"ContainerStarted","Data":"c989c6e6064985a17e56f3ae16b1dca33a7caec5256b869f153c7ac88f82fff9"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.538011 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" event={"ID":"2e7b2089-a948-4a74-8bab-7eae32913dbd","Type":"ContainerStarted","Data":"478abb5048d8e608bb22ffd063a0a1605755aede1cc6502d3d473e405ea8a1d3"} Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.543409 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" podStartSLOduration=124.543389113 podStartE2EDuration="2m4.543389113s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.540729316 +0000 UTC m=+146.231695373" watchObservedRunningTime="2026-02-19 00:10:00.543389113 +0000 UTC m=+146.234355160" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.569025 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" podStartSLOduration=123.569006568 podStartE2EDuration="2m3.569006568s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.56876163 +0000 UTC m=+146.259727677" watchObservedRunningTime="2026-02-19 00:10:00.569006568 +0000 UTC m=+146.259972615" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.611105 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-76cwz" podStartSLOduration=123.611080561 
podStartE2EDuration="2m3.611080561s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.609307833 +0000 UTC m=+146.300273890" watchObservedRunningTime="2026-02-19 00:10:00.611080561 +0000 UTC m=+146.302046608" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.627410 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.627661 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.127625661 +0000 UTC m=+146.818591708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.628383 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.632677 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.132661684 +0000 UTC m=+146.823627731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.692810 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qz7nq" podStartSLOduration=123.692789406 podStartE2EDuration="2m3.692789406s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.648045777 +0000 UTC m=+146.339011834" watchObservedRunningTime="2026-02-19 00:10:00.692789406 +0000 UTC m=+146.383755453" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.693387 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" podStartSLOduration=123.693382625 podStartE2EDuration="2m3.693382625s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.690267034 +0000 UTC m=+146.381233091" watchObservedRunningTime="2026-02-19 00:10:00.693382625 +0000 UTC m=+146.384348672" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.729793 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.730031 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.23000274 +0000 UTC m=+146.920968787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.730256 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.730314 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jzfx7" podStartSLOduration=123.73029541 podStartE2EDuration="2m3.73029541s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.728190862 +0000 UTC m=+146.419156919" watchObservedRunningTime="2026-02-19 00:10:00.73029541 +0000 UTC m=+146.421261457" Feb 19 
00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.730660 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.230641691 +0000 UTC m=+146.921607738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.810404 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" podStartSLOduration=124.810384623 podStartE2EDuration="2m4.810384623s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.809440392 +0000 UTC m=+146.500406459" watchObservedRunningTime="2026-02-19 00:10:00.810384623 +0000 UTC m=+146.501350660" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.811143 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" podStartSLOduration=123.811136847 podStartE2EDuration="2m3.811136847s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.771318778 +0000 UTC m=+146.462284825" watchObservedRunningTime="2026-02-19 00:10:00.811136847 +0000 UTC 
m=+146.502102884" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.831846 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.832078 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.3320448 +0000 UTC m=+147.023010857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.832215 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.832641 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 00:10:01.332631359 +0000 UTC m=+147.023597446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.850714 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tlj4k" podStartSLOduration=6.850695047 podStartE2EDuration="6.850695047s" podCreationTimestamp="2026-02-19 00:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.847583596 +0000 UTC m=+146.538549653" watchObservedRunningTime="2026-02-19 00:10:00.850695047 +0000 UTC m=+146.541661094" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.933898 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.934057 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.434033477 +0000 UTC m=+147.124999524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.934277 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:00 crc kubenswrapper[4825]: E0219 00:10:00.934671 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.434660967 +0000 UTC m=+147.125627014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.940346 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g5gxr" podStartSLOduration=123.940327422 podStartE2EDuration="2m3.940327422s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.899644395 +0000 UTC m=+146.590610452" watchObservedRunningTime="2026-02-19 00:10:00.940327422 +0000 UTC m=+146.631293469" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.940942 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-88d6d" podStartSLOduration=124.940936422 podStartE2EDuration="2m4.940936422s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.938161312 +0000 UTC m=+146.629127369" watchObservedRunningTime="2026-02-19 00:10:00.940936422 +0000 UTC m=+146.631902469" Feb 19 00:10:00 crc kubenswrapper[4825]: I0219 00:10:00.972829 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" podStartSLOduration=123.972809662 podStartE2EDuration="2m3.972809662s" 
podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:00.97029822 +0000 UTC m=+146.661264277" watchObservedRunningTime="2026-02-19 00:10:00.972809662 +0000 UTC m=+146.663775709" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.035295 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.035527 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.535486666 +0000 UTC m=+147.226452713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.036497 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.037139 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.53712593 +0000 UTC m=+147.228092067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.124196 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.124277 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.139257 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.139624 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.639473798 +0000 UTC m=+147.330439845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.139946 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.140440 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.64042346 +0000 UTC m=+147.331389507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.241269 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.241498 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.741466406 +0000 UTC m=+147.432432453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.242163 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.242558 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.742543431 +0000 UTC m=+147.433509478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.343837 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.344237 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.844203788 +0000 UTC m=+147.535169835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.445767 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.446315 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:01.946290199 +0000 UTC m=+147.637256446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.543717 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dmlhp" event={"ID":"ac81eab9-9f57-48f0-a71d-0bc6087eb8d8","Type":"ContainerStarted","Data":"fa2b06dec55885e306332b501748dd37c66a1057d4a276a8bc33621842af26cf"} Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.545965 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" event={"ID":"e19e2fd2-3946-48d7-8954-121e91331caa","Type":"ContainerStarted","Data":"a11ac8bb135e4b58830f5e4afd45f271e82c7515c5034377103099adf3eaae95"} Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.547100 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.547236 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.047203481 +0000 UTC m=+147.738169528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.547401 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.547789 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.047777479 +0000 UTC m=+147.738743526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.549243 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rm422" event={"ID":"b7d82a4a-3947-4645-982a-654a8101ba55","Type":"ContainerStarted","Data":"1c0b0f3e56b39aa6529d1bb02737f085a5c1fbb647b6af5be0b3a3020afa45ba"} Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.549293 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-rm422" event={"ID":"b7d82a4a-3947-4645-982a-654a8101ba55","Type":"ContainerStarted","Data":"c90927a046c20c596df8c243c70c3ea461442fc016f14060e2c8b3ba89403dce"} Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.551151 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" event={"ID":"fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5","Type":"ContainerStarted","Data":"9ba2ce57d5585a86c8bd5db43305e72bff61d3fa4b2f1dcbe9f8930ca3034981"} Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.553483 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-f9db5" event={"ID":"ac92fb7e-c51b-4cd9-951a-bd6153cfb0f0","Type":"ContainerStarted","Data":"fba30e75de92cd19ed90b45d22f74f2cba99f6be8bed892c7f5873c77d5ff489"} Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.555529 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp" 
event={"ID":"8334d428-f7e6-41d5-806b-b5471a354fe8","Type":"ContainerStarted","Data":"897c2bf88430a64b2094214344f6bdede85d1f00882239a7329486b46bbd0451"} Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.555933 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.557393 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w" event={"ID":"ea799468-c418-4b46-b279-cb78c37b2ce3","Type":"ContainerStarted","Data":"6fb89c28d607a9e7702150126e633858d32ab2da19e69b520cc75c385f0147e8"} Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.560007 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xbfwq" event={"ID":"113bf002-41c6-4821-83eb-db9515e73174","Type":"ContainerStarted","Data":"90653b1baf3f5a9d7a4e6a53b9cc770ddd6f1bc267b9e489d7e33d71c0e737df"} Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.560036 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xbfwq" event={"ID":"113bf002-41c6-4821-83eb-db9515e73174","Type":"ContainerStarted","Data":"ed198314c77d708e91da4c5e19d3765602faf5fe96eeaddb1631990c7b43abcc"} Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.560051 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xbfwq" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.563024 4825 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qqkkd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.563076 4825 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" podUID="56d54501-3819-4db7-876f-d530dca3082c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.564109 4825 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hwqsn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.564140 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" podUID="fcce0511-0fb0-48e9-927c-c259595a806b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.564158 4825 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dc72t container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.564200 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" podUID="76110eca-3d21-477d-9656-965eaa768c21" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.564663 4825 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qh6r7 container/packageserver namespace/openshift-operator-lifecycle-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.564696 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7" podUID="b925a9fb-8e1f-4fbd-93c7-2f65dc3dc37b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.570924 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-dmlhp" podStartSLOduration=125.570906704 podStartE2EDuration="2m5.570906704s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:01.568069931 +0000 UTC m=+147.259035968" watchObservedRunningTime="2026-02-19 00:10:01.570906704 +0000 UTC m=+147.261872751" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.639991 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zbhrb" podStartSLOduration=124.639965137 podStartE2EDuration="2m4.639965137s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:01.636029919 +0000 UTC m=+147.326995966" watchObservedRunningTime="2026-02-19 00:10:01.639965137 +0000 UTC m=+147.330931194" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.648548 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.650406 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.150377847 +0000 UTC m=+147.841343894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.677715 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fttgh" podStartSLOduration=124.677691758 podStartE2EDuration="2m4.677691758s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:01.667896268 +0000 UTC m=+147.358862335" watchObservedRunningTime="2026-02-19 00:10:01.677691758 +0000 UTC m=+147.368657805" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.710923 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-rm422" podStartSLOduration=125.71089816 podStartE2EDuration="2m5.71089816s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:01.706764296 +0000 UTC m=+147.397730363" watchObservedRunningTime="2026-02-19 00:10:01.71089816 +0000 UTC m=+147.401864207" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.733409 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jsk5w" podStartSLOduration=124.733389094 podStartE2EDuration="2m4.733389094s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:01.731613317 +0000 UTC m=+147.422579384" watchObservedRunningTime="2026-02-19 00:10:01.733389094 +0000 UTC m=+147.424355141" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.751379 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.752097 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.252075004 +0000 UTC m=+147.943041051 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.762940 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xbfwq" podStartSLOduration=7.762911578 podStartE2EDuration="7.762911578s" podCreationTimestamp="2026-02-19 00:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:01.7623633 +0000 UTC m=+147.453329357" watchObservedRunningTime="2026-02-19 00:10:01.762911578 +0000 UTC m=+147.453877625" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.784296 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" podStartSLOduration=124.784274925 podStartE2EDuration="2m4.784274925s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:01.783087736 +0000 UTC m=+147.474053783" watchObservedRunningTime="2026-02-19 00:10:01.784274925 +0000 UTC m=+147.475240962" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.803975 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-w8x8x" podStartSLOduration=125.803956457 podStartE2EDuration="2m5.803956457s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:01.803622706 +0000 UTC m=+147.494588763" watchObservedRunningTime="2026-02-19 00:10:01.803956457 +0000 UTC m=+147.494922514" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.834698 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-f9db5" podStartSLOduration=124.83468309 podStartE2EDuration="2m4.83468309s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:01.833451069 +0000 UTC m=+147.524417116" watchObservedRunningTime="2026-02-19 00:10:01.83468309 +0000 UTC m=+147.525649137" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.853113 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.853494 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.353473602 +0000 UTC m=+148.044439649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.858282 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp" podStartSLOduration=124.858270399 podStartE2EDuration="2m4.858270399s" podCreationTimestamp="2026-02-19 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:01.858112994 +0000 UTC m=+147.549079041" watchObservedRunningTime="2026-02-19 00:10:01.858270399 +0000 UTC m=+147.549236446" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.885767 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-754pg" podStartSLOduration=125.885748286 podStartE2EDuration="2m5.885748286s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:01.882987905 +0000 UTC m=+147.573953952" watchObservedRunningTime="2026-02-19 00:10:01.885748286 +0000 UTC m=+147.576714333" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.915313 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q5z5p" podStartSLOduration=125.915281098 podStartE2EDuration="2m5.915281098s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:01.91195329 +0000 UTC m=+147.602919347" watchObservedRunningTime="2026-02-19 00:10:01.915281098 +0000 UTC m=+147.606247145" Feb 19 00:10:01 crc kubenswrapper[4825]: I0219 00:10:01.960870 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:01 crc kubenswrapper[4825]: E0219 00:10:01.961395 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.461370743 +0000 UTC m=+148.152336790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.063365 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.063734 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.563718151 +0000 UTC m=+148.254684198 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.111577 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.112058 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.113889 4825 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-jkb4q container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.13:8443/livez\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.113937 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q" podUID="fe08af25-4eb5-4cae-aada-e0b9d0bd8ff5" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.13:8443/livez\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.125580 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:10:02 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Feb 19 00:10:02 crc kubenswrapper[4825]: 
[+]process-running ok Feb 19 00:10:02 crc kubenswrapper[4825]: healthz check failed Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.125665 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.165307 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.165661 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.665646927 +0000 UTC m=+148.356612974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.266211 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.266380 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.766352112 +0000 UTC m=+148.457318159 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.266574 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.266930 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.76691201 +0000 UTC m=+148.457878057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.367210 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.367432 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.867407289 +0000 UTC m=+148.558373336 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.367694 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.368148 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.868137153 +0000 UTC m=+148.559103200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.469026 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.469230 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.969187749 +0000 UTC m=+148.660153796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.469882 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.470253 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:02.970243913 +0000 UTC m=+148.661209960 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.566326 4825 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hwqsn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.566643 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn" podUID="fcce0511-0fb0-48e9-927c-c259595a806b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.566340 4825 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qqkkd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.566739 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd" podUID="56d54501-3819-4db7-876f-d530dca3082c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Feb 19 00:10:02 crc 
kubenswrapper[4825]: I0219 00:10:02.568229 4825 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dc72t container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.568339 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" podUID="76110eca-3d21-477d-9656-965eaa768c21" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.571107 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.571697 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.071664913 +0000 UTC m=+148.762630960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.571933 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.573058 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.073033467 +0000 UTC m=+148.763999514 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.620884 4825 csr.go:261] certificate signing request csr-pjpr5 is approved, waiting to be issued Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.629563 4825 csr.go:257] certificate signing request csr-pjpr5 is issued Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.673159 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.674352 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.174333042 +0000 UTC m=+148.865299089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.775764 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.776207 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.276191725 +0000 UTC m=+148.967157772 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.877420 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.877622 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.877685 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.377648325 +0000 UTC m=+149.068614372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.877756 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.877851 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.877969 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.884982 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.886352 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.887032 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.889311 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:10:02 crc kubenswrapper[4825]: I0219 00:10:02.980404 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:02 crc kubenswrapper[4825]: E0219 00:10:02.980865 4825 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.480847442 +0000 UTC m=+149.171813489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.080246 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.081374 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.081590 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.581564908 +0000 UTC m=+149.272530955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.081812 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.082215 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.582204338 +0000 UTC m=+149.273170385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.086890 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.093187 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.129673 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:10:03 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Feb 19 00:10:03 crc kubenswrapper[4825]: [+]process-running ok Feb 19 00:10:03 crc kubenswrapper[4825]: healthz check failed Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.129737 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.146471 4825 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-sms5d container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.146549 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" podUID="f5f23c6e-7172-4c6c-87f0-8fa5edfa8248" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.146648 4825 patch_prober.go:28] 
interesting pod/openshift-config-operator-7777fb866f-sms5d container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.146669 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" podUID="f5f23c6e-7172-4c6c-87f0-8fa5edfa8248" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.183248 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.183696 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.683665578 +0000 UTC m=+149.374631625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.284906 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.285382 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.785364346 +0000 UTC m=+149.476330393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.386018 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.386241 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.886210776 +0000 UTC m=+149.577176823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.386672 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.387092 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.887077155 +0000 UTC m=+149.578043202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.417937 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.418950 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.421378 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.423570 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.438287 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.487474 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.487711 4825 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.987679447 +0000 UTC m=+149.678645494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.487744 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10032c2d-c770-40c7-9d13-15b1de8a4257-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"10032c2d-c770-40c7-9d13-15b1de8a4257\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.487940 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.487995 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10032c2d-c770-40c7-9d13-15b1de8a4257-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"10032c2d-c770-40c7-9d13-15b1de8a4257\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.488404 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:03.98839608 +0000 UTC m=+149.679362117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.582177 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f8slh" event={"ID":"0556f35e-37e8-4ae8-9bc4-32394e9f86ab","Type":"ContainerStarted","Data":"a6c263089688d9e4e8b1054f46a99de894755b89f08a9e7b70ca54fe3acabf44"} Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.589067 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.589300 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10032c2d-c770-40c7-9d13-15b1de8a4257-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"10032c2d-c770-40c7-9d13-15b1de8a4257\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 
00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.589347 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10032c2d-c770-40c7-9d13-15b1de8a4257-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"10032c2d-c770-40c7-9d13-15b1de8a4257\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.590090 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:04.090074777 +0000 UTC m=+149.781040824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.590122 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10032c2d-c770-40c7-9d13-15b1de8a4257-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"10032c2d-c770-40c7-9d13-15b1de8a4257\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.632605 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 00:05:02 +0000 UTC, rotation deadline is 2026-12-15 11:24:18.056280684 +0000 UTC Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.632648 4825 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Waiting 7187h14m14.423636239s for next certificate rotation Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.639241 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10032c2d-c770-40c7-9d13-15b1de8a4257-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"10032c2d-c770-40c7-9d13-15b1de8a4257\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.693134 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.695339 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:04.195326821 +0000 UTC m=+149.886292868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.794811 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.795149 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:04.295117486 +0000 UTC m=+149.986083533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.795424 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.795780 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:04.295767768 +0000 UTC m=+149.986733815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.805191 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:10:03 crc kubenswrapper[4825]: W0219 00:10:03.807547 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-2414a9697be91c004f1d4ed692be14428f4308c623eeee3c68d904086d4132f5 WatchSource:0}: Error finding container 2414a9697be91c004f1d4ed692be14428f4308c623eeee3c68d904086d4132f5: Status 404 returned error can't find the container with id 2414a9697be91c004f1d4ed692be14428f4308c623eeee3c68d904086d4132f5 Feb 19 00:10:03 crc kubenswrapper[4825]: I0219 00:10:03.897965 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:03 crc kubenswrapper[4825]: E0219 00:10:03.898298 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:04.398282452 +0000 UTC m=+150.089248499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.003266 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.004426 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:04.504413334 +0000 UTC m=+150.195379381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.105264 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.105484 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:04.605455591 +0000 UTC m=+150.296421638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.105988 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.106368 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:04.60635405 +0000 UTC m=+150.297320097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.126031 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:10:04 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Feb 19 00:10:04 crc kubenswrapper[4825]: [+]process-running ok Feb 19 00:10:04 crc kubenswrapper[4825]: healthz check failed Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.126093 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.207749 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.208040 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 00:10:04.708012817 +0000 UTC m=+150.398978864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.208344 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.208738 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:04.70872558 +0000 UTC m=+150.399691617 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.309149 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.309489 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:04.809473897 +0000 UTC m=+150.500439944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.415522 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.416152 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:04.916137357 +0000 UTC m=+150.607103404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.517305 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.517844 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:05.017823844 +0000 UTC m=+150.708789891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.587397 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4427402dfcc092b205b96abc885e465d22bf79bc88fbf7ddf162c7e5679db251"} Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.587444 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"24f417ac51c24770ba89c5d96557f75f5f3587c1d7d1d2bf8781d3c41e4a2350"} Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.588374 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.590192 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"479ee1be64f80a3c00689ff7cfb6291c8b36ecaa774b904077eef442f35c8581"} Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.590219 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ad1efeff304e4662425758d12308d68808b8687642885acf30f0091e055b3549"} 
Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.592455 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4ca18aac15a6f402cc8d7ba28e08f2848ad09fb19891538195397f005028f9cd"} Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.592479 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2414a9697be91c004f1d4ed692be14428f4308c623eeee3c68d904086d4132f5"} Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.614138 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.618477 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.618782 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:05.118771667 +0000 UTC m=+150.809737714 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: W0219 00:10:04.646515 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod10032c2d_c770_40c7_9d13_15b1de8a4257.slice/crio-b14cd3e1cea137e9272d439dafe040592b03eda8c46d0059434199691fc7d749 WatchSource:0}: Error finding container b14cd3e1cea137e9272d439dafe040592b03eda8c46d0059434199691fc7d749: Status 404 returned error can't find the container with id b14cd3e1cea137e9272d439dafe040592b03eda8c46d0059434199691fc7d749 Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.719643 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.719967 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:05.219935717 +0000 UTC m=+150.910901764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.720182 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.721136 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:05.221117516 +0000 UTC m=+150.912083563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.821290 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.821679 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:05.321660077 +0000 UTC m=+151.012626124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.925400 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:04 crc kubenswrapper[4825]: E0219 00:10:04.925758 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:05.425742352 +0000 UTC m=+151.116708399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.928982 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:10:04 crc kubenswrapper[4825]: I0219 00:10:04.952952 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.035739 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:05 crc kubenswrapper[4825]: E0219 00:10:05.037245 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:05.537223369 +0000 UTC m=+151.228189416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.043756 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.043813 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.043991 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.044041 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.128602 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:10:05 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Feb 19 00:10:05 crc kubenswrapper[4825]: [+]process-running ok Feb 19 00:10:05 crc kubenswrapper[4825]: healthz check failed Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.128664 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.138369 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:05 crc kubenswrapper[4825]: E0219 00:10:05.138766 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:05.638751961 +0000 UTC m=+151.329718008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.239886 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:05 crc kubenswrapper[4825]: E0219 00:10:05.240298 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:05.740278593 +0000 UTC m=+151.431244640 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.341599 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:05 crc kubenswrapper[4825]: E0219 00:10:05.341953 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:05.841938641 +0000 UTC m=+151.532904688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.442578 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:05 crc kubenswrapper[4825]: E0219 00:10:05.442874 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:05.942837292 +0000 UTC m=+151.633803329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.443144 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:05 crc kubenswrapper[4825]: E0219 00:10:05.443488 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:05.943472183 +0000 UTC m=+151.634438230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.544531 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:05 crc kubenswrapper[4825]: E0219 00:10:05.544904 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:06.044889171 +0000 UTC m=+151.735855218 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.599234 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"10032c2d-c770-40c7-9d13-15b1de8a4257","Type":"ContainerStarted","Data":"9501d2ac163881366bb350b395a593ddeec886a4bdb6cc16d5b90528e909af89"} Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.599274 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"10032c2d-c770-40c7-9d13-15b1de8a4257","Type":"ContainerStarted","Data":"b14cd3e1cea137e9272d439dafe040592b03eda8c46d0059434199691fc7d749"} Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.626130 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.626112171 podStartE2EDuration="2.626112171s" podCreationTimestamp="2026-02-19 00:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:05.622474253 +0000 UTC m=+151.313440300" watchObservedRunningTime="2026-02-19 00:10:05.626112171 +0000 UTC m=+151.317078218" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.646037 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:05 crc kubenswrapper[4825]: E0219 00:10:05.646395 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:06.146382482 +0000 UTC m=+151.837348529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.655063 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.664912 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.747459 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:05 crc kubenswrapper[4825]: E0219 00:10:05.747662 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:06.247633585 +0000 UTC m=+151.938599632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.748002 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:05 crc kubenswrapper[4825]: E0219 00:10:05.749765 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:06.249755195 +0000 UTC m=+151.940721242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.769032 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.770015 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-rm422" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.771748 4825 patch_prober.go:28] interesting pod/apiserver-76f77b778f-rm422 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.12:8443/livez\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.771791 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-rm422" podUID="b7d82a4a-3947-4645-982a-654a8101ba55" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.12:8443/livez\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.790571 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-czdg9" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.851187 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:05 crc kubenswrapper[4825]: E0219 00:10:05.852004 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:06.35197495 +0000 UTC m=+152.042941027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.952882 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:05 crc kubenswrapper[4825]: E0219 00:10:05.953992 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:06.453974087 +0000 UTC m=+152.144940134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.983793 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-92549" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.983840 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-92549" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.985678 4825 patch_prober.go:28] interesting pod/console-f9d7485db-92549 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.985817 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-92549" podUID="1f6724cf-dc1e-44cc-8f59-91d3e8b00970" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.992267 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zgq6r"] Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.993375 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:10:05 crc kubenswrapper[4825]: I0219 00:10:05.996735 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.028429 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zgq6r"] Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.056557 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.057665 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:06.557632529 +0000 UTC m=+152.248598576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.092572 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.118321 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.135015 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:10:06 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Feb 19 00:10:06 crc kubenswrapper[4825]: [+]process-running ok Feb 19 00:10:06 crc kubenswrapper[4825]: healthz check failed Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.135088 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.157959 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g69f\" (UniqueName: 
\"kubernetes.io/projected/d7139b54-5e59-487a-bbf3-2ac657e5e39d-kube-api-access-6g69f\") pod \"community-operators-zgq6r\" (UID: \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\") " pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.158099 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7139b54-5e59-487a-bbf3-2ac657e5e39d-utilities\") pod \"community-operators-zgq6r\" (UID: \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\") " pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.158135 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.158158 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7139b54-5e59-487a-bbf3-2ac657e5e39d-catalog-content\") pod \"community-operators-zgq6r\" (UID: \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\") " pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.159426 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:06.659411449 +0000 UTC m=+152.350377496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.175755 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8xs8g"] Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.176872 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.184833 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sms5d" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.184990 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.188289 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xs8g"] Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.260047 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.260307 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d7139b54-5e59-487a-bbf3-2ac657e5e39d-utilities\") pod \"community-operators-zgq6r\" (UID: \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\") " pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.260348 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7139b54-5e59-487a-bbf3-2ac657e5e39d-catalog-content\") pod \"community-operators-zgq6r\" (UID: \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\") " pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.260374 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3e4f27-2ef4-4187-911b-135249a4454f-catalog-content\") pod \"certified-operators-8xs8g\" (UID: \"fe3e4f27-2ef4-4187-911b-135249a4454f\") " pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.260394 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g69f\" (UniqueName: \"kubernetes.io/projected/d7139b54-5e59-487a-bbf3-2ac657e5e39d-kube-api-access-6g69f\") pod \"community-operators-zgq6r\" (UID: \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\") " pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.260419 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3e4f27-2ef4-4187-911b-135249a4454f-utilities\") pod \"certified-operators-8xs8g\" (UID: \"fe3e4f27-2ef4-4187-911b-135249a4454f\") " pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.260457 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-fg4nd\" (UniqueName: \"kubernetes.io/projected/fe3e4f27-2ef4-4187-911b-135249a4454f-kube-api-access-fg4nd\") pod \"certified-operators-8xs8g\" (UID: \"fe3e4f27-2ef4-4187-911b-135249a4454f\") " pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.260704 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:06.760675603 +0000 UTC m=+152.451641700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.261134 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7139b54-5e59-487a-bbf3-2ac657e5e39d-utilities\") pod \"community-operators-zgq6r\" (UID: \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\") " pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.261361 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7139b54-5e59-487a-bbf3-2ac657e5e39d-catalog-content\") pod \"community-operators-zgq6r\" (UID: \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\") " pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.302182 4825 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6g69f\" (UniqueName: \"kubernetes.io/projected/d7139b54-5e59-487a-bbf3-2ac657e5e39d-kube-api-access-6g69f\") pod \"community-operators-zgq6r\" (UID: \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\") " pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.344881 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-psl6h"] Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.346851 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.362276 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.362593 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3e4f27-2ef4-4187-911b-135249a4454f-catalog-content\") pod \"certified-operators-8xs8g\" (UID: \"fe3e4f27-2ef4-4187-911b-135249a4454f\") " pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.362685 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3e4f27-2ef4-4187-911b-135249a4454f-utilities\") pod \"certified-operators-8xs8g\" (UID: \"fe3e4f27-2ef4-4187-911b-135249a4454f\") " pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.362803 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fg4nd\" (UniqueName: \"kubernetes.io/projected/fe3e4f27-2ef4-4187-911b-135249a4454f-kube-api-access-fg4nd\") pod \"certified-operators-8xs8g\" (UID: \"fe3e4f27-2ef4-4187-911b-135249a4454f\") " pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.362955 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:06.862929119 +0000 UTC m=+152.553895166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.363033 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3e4f27-2ef4-4187-911b-135249a4454f-catalog-content\") pod \"certified-operators-8xs8g\" (UID: \"fe3e4f27-2ef4-4187-911b-135249a4454f\") " pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.363255 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3e4f27-2ef4-4187-911b-135249a4454f-utilities\") pod \"certified-operators-8xs8g\" (UID: \"fe3e4f27-2ef4-4187-911b-135249a4454f\") " pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.364826 4825 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.371288 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-psl6h"] Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.395160 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg4nd\" (UniqueName: \"kubernetes.io/projected/fe3e4f27-2ef4-4187-911b-135249a4454f-kube-api-access-fg4nd\") pod \"certified-operators-8xs8g\" (UID: \"fe3e4f27-2ef4-4187-911b-135249a4454f\") " pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.464580 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.464592 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:06.964488302 +0000 UTC m=+152.655454349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.464802 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f1d2aa-1887-4322-a053-12e950fa2250-utilities\") pod \"community-operators-psl6h\" (UID: \"87f1d2aa-1887-4322-a053-12e950fa2250\") " pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.464859 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f1d2aa-1887-4322-a053-12e950fa2250-catalog-content\") pod \"community-operators-psl6h\" (UID: \"87f1d2aa-1887-4322-a053-12e950fa2250\") " pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.464882 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2bq\" (UniqueName: \"kubernetes.io/projected/87f1d2aa-1887-4322-a053-12e950fa2250-kube-api-access-gs2bq\") pod \"community-operators-psl6h\" (UID: \"87f1d2aa-1887-4322-a053-12e950fa2250\") " pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.464944 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.465412 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:06.965393982 +0000 UTC m=+152.656360029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.519153 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.564408 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hl5qz"] Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.565542 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hl5qz" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.565797 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.566044 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f1d2aa-1887-4322-a053-12e950fa2250-utilities\") pod \"community-operators-psl6h\" (UID: \"87f1d2aa-1887-4322-a053-12e950fa2250\") " pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.566102 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f1d2aa-1887-4322-a053-12e950fa2250-catalog-content\") pod \"community-operators-psl6h\" (UID: \"87f1d2aa-1887-4322-a053-12e950fa2250\") " pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.566132 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs2bq\" (UniqueName: \"kubernetes.io/projected/87f1d2aa-1887-4322-a053-12e950fa2250-kube-api-access-gs2bq\") pod \"community-operators-psl6h\" (UID: \"87f1d2aa-1887-4322-a053-12e950fa2250\") " pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.566841 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f1d2aa-1887-4322-a053-12e950fa2250-utilities\") pod \"community-operators-psl6h\" (UID: \"87f1d2aa-1887-4322-a053-12e950fa2250\") " 
pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.566965 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:07.066946415 +0000 UTC m=+152.757912462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.566994 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f1d2aa-1887-4322-a053-12e950fa2250-catalog-content\") pod \"community-operators-psl6h\" (UID: \"87f1d2aa-1887-4322-a053-12e950fa2250\") " pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.586973 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hl5qz"] Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.614250 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs2bq\" (UniqueName: \"kubernetes.io/projected/87f1d2aa-1887-4322-a053-12e950fa2250-kube-api-access-gs2bq\") pod \"community-operators-psl6h\" (UID: \"87f1d2aa-1887-4322-a053-12e950fa2250\") " pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.625369 4825 generic.go:334] "Generic (PLEG): container finished" 
podID="10032c2d-c770-40c7-9d13-15b1de8a4257" containerID="9501d2ac163881366bb350b395a593ddeec886a4bdb6cc16d5b90528e909af89" exitCode=0 Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.625697 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"10032c2d-c770-40c7-9d13-15b1de8a4257","Type":"ContainerDied","Data":"9501d2ac163881366bb350b395a593ddeec886a4bdb6cc16d5b90528e909af89"} Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.661269 4825 generic.go:334] "Generic (PLEG): container finished" podID="14aa4fde-5a0f-41ce-a61e-902c0597d698" containerID="0a514186a0294ad5d52c2b3e9593ef66b85be73ecd4bd684f68d7910302a6326" exitCode=0 Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.662020 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" event={"ID":"14aa4fde-5a0f-41ce-a61e-902c0597d698","Type":"ContainerDied","Data":"0a514186a0294ad5d52c2b3e9593ef66b85be73ecd4bd684f68d7910302a6326"} Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.669368 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpbsh\" (UniqueName: \"kubernetes.io/projected/bee79dcb-9870-4633-9f5b-56d2177c2616-kube-api-access-wpbsh\") pod \"certified-operators-hl5qz\" (UID: \"bee79dcb-9870-4633-9f5b-56d2177c2616\") " pod="openshift-marketplace/certified-operators-hl5qz" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.674680 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 
00:10:06.674725 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee79dcb-9870-4633-9f5b-56d2177c2616-catalog-content\") pod \"certified-operators-hl5qz\" (UID: \"bee79dcb-9870-4633-9f5b-56d2177c2616\") " pod="openshift-marketplace/certified-operators-hl5qz" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.674791 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee79dcb-9870-4633-9f5b-56d2177c2616-utilities\") pod \"certified-operators-hl5qz\" (UID: \"bee79dcb-9870-4633-9f5b-56d2177c2616\") " pod="openshift-marketplace/certified-operators-hl5qz" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.671406 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.675639 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:07.175619061 +0000 UTC m=+152.866585108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.780591 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zgq6r"] Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.781041 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:07.281024109 +0000 UTC m=+152.971990156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.780945 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.781402 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpbsh\" (UniqueName: \"kubernetes.io/projected/bee79dcb-9870-4633-9f5b-56d2177c2616-kube-api-access-wpbsh\") pod \"certified-operators-hl5qz\" (UID: \"bee79dcb-9870-4633-9f5b-56d2177c2616\") " pod="openshift-marketplace/certified-operators-hl5qz" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.781491 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.781540 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee79dcb-9870-4633-9f5b-56d2177c2616-catalog-content\") pod \"certified-operators-hl5qz\" 
(UID: \"bee79dcb-9870-4633-9f5b-56d2177c2616\") " pod="openshift-marketplace/certified-operators-hl5qz" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.783300 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee79dcb-9870-4633-9f5b-56d2177c2616-catalog-content\") pod \"certified-operators-hl5qz\" (UID: \"bee79dcb-9870-4633-9f5b-56d2177c2616\") " pod="openshift-marketplace/certified-operators-hl5qz" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.781624 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee79dcb-9870-4633-9f5b-56d2177c2616-utilities\") pod \"certified-operators-hl5qz\" (UID: \"bee79dcb-9870-4633-9f5b-56d2177c2616\") " pod="openshift-marketplace/certified-operators-hl5qz" Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.783747 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:07.283733897 +0000 UTC m=+152.974699944 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.783846 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee79dcb-9870-4633-9f5b-56d2177c2616-utilities\") pod \"certified-operators-hl5qz\" (UID: \"bee79dcb-9870-4633-9f5b-56d2177c2616\") " pod="openshift-marketplace/certified-operators-hl5qz" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.809024 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpbsh\" (UniqueName: \"kubernetes.io/projected/bee79dcb-9870-4633-9f5b-56d2177c2616-kube-api-access-wpbsh\") pod \"certified-operators-hl5qz\" (UID: \"bee79dcb-9870-4633-9f5b-56d2177c2616\") " pod="openshift-marketplace/certified-operators-hl5qz" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.890603 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.890693 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 00:10:07.390674396 +0000 UTC m=+153.081640443 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.892527 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.892945 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:07.3929315 +0000 UTC m=+153.083897547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.992427 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hl5qz" Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.993640 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xs8g"] Feb 19 00:10:06 crc kubenswrapper[4825]: I0219 00:10:06.994264 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:06 crc kubenswrapper[4825]: E0219 00:10:06.994708 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:07.494677959 +0000 UTC m=+153.185644006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.095397 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:07 crc kubenswrapper[4825]: E0219 00:10:07.095758 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:07.595746157 +0000 UTC m=+153.286712204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.130540 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.131363 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.139639 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:10:07 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld
Feb 19 00:10:07 crc kubenswrapper[4825]: [+]process-running ok
Feb 19 00:10:07 crc kubenswrapper[4825]: healthz check failed
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.139699 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.141470 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkb4q"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.161860 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-psl6h"]
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.197065 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:10:07 crc kubenswrapper[4825]: E0219 00:10:07.197259 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:07.697238918 +0000 UTC m=+153.388204965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.197800 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:07 crc kubenswrapper[4825]: E0219 00:10:07.200498 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:07.700488824 +0000 UTC m=+153.391454871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.305387 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:10:07 crc kubenswrapper[4825]: E0219 00:10:07.306560 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:07.806542634 +0000 UTC m=+153.497508681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.406817 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:07 crc kubenswrapper[4825]: E0219 00:10:07.407368 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:07.907345852 +0000 UTC m=+153.598311899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.522303 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:10:07 crc kubenswrapper[4825]: E0219 00:10:07.522485 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.022459788 +0000 UTC m=+153.713425835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.522677 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:07 crc kubenswrapper[4825]: E0219 00:10:07.523040 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.023020726 +0000 UTC m=+153.713986773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.527499 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hwqsn"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.532011 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qh6r7"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.565635 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.589441 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hl5qz"]
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.623670 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:10:07 crc kubenswrapper[4825]: E0219 00:10:07.624836 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.124819557 +0000 UTC m=+153.815785604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.684870 4825 generic.go:334] "Generic (PLEG): container finished" podID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" containerID="8e61f6547fd764655955db08e841b5b1810a87f4ca40cf52cbd59da2946a99f9" exitCode=0
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.684976 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgq6r" event={"ID":"d7139b54-5e59-487a-bbf3-2ac657e5e39d","Type":"ContainerDied","Data":"8e61f6547fd764655955db08e841b5b1810a87f4ca40cf52cbd59da2946a99f9"}
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.685045 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgq6r" event={"ID":"d7139b54-5e59-487a-bbf3-2ac657e5e39d","Type":"ContainerStarted","Data":"32a9f035f6af65a419323b823347882027c54740d4788f3e4c1a561853bf65a4"}
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.687406 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.689289 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psl6h" event={"ID":"87f1d2aa-1887-4322-a053-12e950fa2250","Type":"ContainerStarted","Data":"dd277e8978b6c6f3d32cde45ff36ef4110ab976443b23930c8c7c266f53e9d48"}
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.697245 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xs8g" event={"ID":"fe3e4f27-2ef4-4187-911b-135249a4454f","Type":"ContainerStarted","Data":"04495eea170b3b6ac7d6de27ddc9c4d259cae7e66e02a1d5d08aa8a23a9c3dae"}
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.697304 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xs8g" event={"ID":"fe3e4f27-2ef4-4187-911b-135249a4454f","Type":"ContainerStarted","Data":"b0dd44d98a1759ede0d507f2693ac82c227e5d1abf0f4d29ca8891e360eec11a"}
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.699443 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hl5qz" event={"ID":"bee79dcb-9870-4633-9f5b-56d2177c2616","Type":"ContainerStarted","Data":"1df127faf400a4d5a02dba3c2b5d71d01b3992e930c1b84e2e572da1f6388c6e"}
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.725661 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:07 crc kubenswrapper[4825]: E0219 00:10:07.726744 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.226705491 +0000 UTC m=+153.917671538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.780238 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.787162 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.791625 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.791972 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.793659 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.827527 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:10:07 crc kubenswrapper[4825]: E0219 00:10:07.827688 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.327652914 +0000 UTC m=+154.018618961 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.828365 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:07 crc kubenswrapper[4825]: E0219 00:10:07.829364 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.329352349 +0000 UTC m=+154.020318596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.912398 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qqkkd"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.930603 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.930807 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b59f0427-6570-4c83-9040-1267e58584aa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b59f0427-6570-4c83-9040-1267e58584aa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.930851 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b59f0427-6570-4c83-9040-1267e58584aa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b59f0427-6570-4c83-9040-1267e58584aa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 19 00:10:07 crc kubenswrapper[4825]: E0219 00:10:07.931027 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.431009697 +0000 UTC m=+154.121975744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.934114 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7g8mt"]
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.935400 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.941369 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 19 00:10:07 crc kubenswrapper[4825]: I0219 00:10:07.951056 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g8mt"]
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.032675 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5430045e-a57c-4dd3-8205-737c277afd00-catalog-content\") pod \"redhat-marketplace-7g8mt\" (UID: \"5430045e-a57c-4dd3-8205-737c277afd00\") " pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.032717 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5430045e-a57c-4dd3-8205-737c277afd00-utilities\") pod \"redhat-marketplace-7g8mt\" (UID: \"5430045e-a57c-4dd3-8205-737c277afd00\") " pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.032792 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.032870 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lp79\" (UniqueName: \"kubernetes.io/projected/5430045e-a57c-4dd3-8205-737c277afd00-kube-api-access-7lp79\") pod \"redhat-marketplace-7g8mt\" (UID: \"5430045e-a57c-4dd3-8205-737c277afd00\") " pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.032901 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b59f0427-6570-4c83-9040-1267e58584aa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b59f0427-6570-4c83-9040-1267e58584aa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.032956 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b59f0427-6570-4c83-9040-1267e58584aa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b59f0427-6570-4c83-9040-1267e58584aa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.033041 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b59f0427-6570-4c83-9040-1267e58584aa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b59f0427-6570-4c83-9040-1267e58584aa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.033970 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.533957654 +0000 UTC m=+154.224923701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.053032 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.059802 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b59f0427-6570-4c83-9040-1267e58584aa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b59f0427-6570-4c83-9040-1267e58584aa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.133630 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.133694 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vt9c\" (UniqueName: \"kubernetes.io/projected/14aa4fde-5a0f-41ce-a61e-902c0597d698-kube-api-access-8vt9c\") pod \"14aa4fde-5a0f-41ce-a61e-902c0597d698\" (UID: \"14aa4fde-5a0f-41ce-a61e-902c0597d698\") "
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.133732 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14aa4fde-5a0f-41ce-a61e-902c0597d698-config-volume\") pod \"14aa4fde-5a0f-41ce-a61e-902c0597d698\" (UID: \"14aa4fde-5a0f-41ce-a61e-902c0597d698\") "
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.133757 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14aa4fde-5a0f-41ce-a61e-902c0597d698-secret-volume\") pod \"14aa4fde-5a0f-41ce-a61e-902c0597d698\" (UID: \"14aa4fde-5a0f-41ce-a61e-902c0597d698\") "
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.134578 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5430045e-a57c-4dd3-8205-737c277afd00-utilities\") pod \"redhat-marketplace-7g8mt\" (UID: \"5430045e-a57c-4dd3-8205-737c277afd00\") " pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.134648 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lp79\" (UniqueName: \"kubernetes.io/projected/5430045e-a57c-4dd3-8205-737c277afd00-kube-api-access-7lp79\") pod \"redhat-marketplace-7g8mt\" (UID: \"5430045e-a57c-4dd3-8205-737c277afd00\") " pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.134701 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5430045e-a57c-4dd3-8205-737c277afd00-catalog-content\") pod \"redhat-marketplace-7g8mt\" (UID: \"5430045e-a57c-4dd3-8205-737c277afd00\") " pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.135097 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5430045e-a57c-4dd3-8205-737c277afd00-catalog-content\") pod \"redhat-marketplace-7g8mt\" (UID: \"5430045e-a57c-4dd3-8205-737c277afd00\") " pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.135162 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.635147476 +0000 UTC m=+154.326113523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.139260 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14aa4fde-5a0f-41ce-a61e-902c0597d698-kube-api-access-8vt9c" (OuterVolumeSpecName: "kube-api-access-8vt9c") pod "14aa4fde-5a0f-41ce-a61e-902c0597d698" (UID: "14aa4fde-5a0f-41ce-a61e-902c0597d698"). InnerVolumeSpecName "kube-api-access-8vt9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.139583 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14aa4fde-5a0f-41ce-a61e-902c0597d698-config-volume" (OuterVolumeSpecName: "config-volume") pod "14aa4fde-5a0f-41ce-a61e-902c0597d698" (UID: "14aa4fde-5a0f-41ce-a61e-902c0597d698"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.141279 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:10:08 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld
Feb 19 00:10:08 crc kubenswrapper[4825]: [+]process-running ok
Feb 19 00:10:08 crc kubenswrapper[4825]: healthz check failed
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.142886 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.143215 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14aa4fde-5a0f-41ce-a61e-902c0597d698-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "14aa4fde-5a0f-41ce-a61e-902c0597d698" (UID: "14aa4fde-5a0f-41ce-a61e-902c0597d698"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.143453 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5430045e-a57c-4dd3-8205-737c277afd00-utilities\") pod \"redhat-marketplace-7g8mt\" (UID: \"5430045e-a57c-4dd3-8205-737c277afd00\") " pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.158855 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.162958 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lp79\" (UniqueName: \"kubernetes.io/projected/5430045e-a57c-4dd3-8205-737c277afd00-kube-api-access-7lp79\") pod \"redhat-marketplace-7g8mt\" (UID: \"5430045e-a57c-4dd3-8205-737c277afd00\") " pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.163968 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.237271 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.237375 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vt9c\" (UniqueName: \"kubernetes.io/projected/14aa4fde-5a0f-41ce-a61e-902c0597d698-kube-api-access-8vt9c\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.237389 4825 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14aa4fde-5a0f-41ce-a61e-902c0597d698-config-volume\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.237399 4825 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/14aa4fde-5a0f-41ce-a61e-902c0597d698-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.237738 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.737723822 +0000 UTC m=+154.428689869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.275171 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.332979 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ptklb"]
Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.333308 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10032c2d-c770-40c7-9d13-15b1de8a4257" containerName="pruner"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.333320 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="10032c2d-c770-40c7-9d13-15b1de8a4257" containerName="pruner"
Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.333331 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14aa4fde-5a0f-41ce-a61e-902c0597d698" containerName="collect-profiles"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.333338 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="14aa4fde-5a0f-41ce-a61e-902c0597d698" containerName="collect-profiles"
Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.333424 4825 memory_manager.go:354]
"RemoveStaleState removing state" podUID="10032c2d-c770-40c7-9d13-15b1de8a4257" containerName="pruner" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.333433 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="14aa4fde-5a0f-41ce-a61e-902c0597d698" containerName="collect-profiles" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.334165 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.338560 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10032c2d-c770-40c7-9d13-15b1de8a4257-kubelet-dir\") pod \"10032c2d-c770-40c7-9d13-15b1de8a4257\" (UID: \"10032c2d-c770-40c7-9d13-15b1de8a4257\") " Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.338693 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.338755 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10032c2d-c770-40c7-9d13-15b1de8a4257-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "10032c2d-c770-40c7-9d13-15b1de8a4257" (UID: "10032c2d-c770-40c7-9d13-15b1de8a4257"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.338801 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10032c2d-c770-40c7-9d13-15b1de8a4257-kube-api-access\") pod \"10032c2d-c770-40c7-9d13-15b1de8a4257\" (UID: \"10032c2d-c770-40c7-9d13-15b1de8a4257\") " Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.338908 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.838892933 +0000 UTC m=+154.529858980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.356637 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptklb"] Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.365436 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10032c2d-c770-40c7-9d13-15b1de8a4257-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "10032c2d-c770-40c7-9d13-15b1de8a4257" (UID: "10032c2d-c770-40c7-9d13-15b1de8a4257"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.380319 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecd4f41a-e074-49eb-950d-44a9a4140304-utilities\") pod \"redhat-marketplace-ptklb\" (UID: \"ecd4f41a-e074-49eb-950d-44a9a4140304\") " pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.380611 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.383336 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecd4f41a-e074-49eb-950d-44a9a4140304-catalog-content\") pod \"redhat-marketplace-ptklb\" (UID: \"ecd4f41a-e074-49eb-950d-44a9a4140304\") " pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.383935 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvl85\" (UniqueName: \"kubernetes.io/projected/ecd4f41a-e074-49eb-950d-44a9a4140304-kube-api-access-kvl85\") pod \"redhat-marketplace-ptklb\" (UID: \"ecd4f41a-e074-49eb-950d-44a9a4140304\") " pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.384004 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10032c2d-c770-40c7-9d13-15b1de8a4257-kube-api-access\") on node 
\"crc\" DevicePath \"\"" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.384017 4825 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10032c2d-c770-40c7-9d13-15b1de8a4257-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.384210 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.884196801 +0000 UTC m=+154.575162848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.485444 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.486041 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecd4f41a-e074-49eb-950d-44a9a4140304-utilities\") pod \"redhat-marketplace-ptklb\" (UID: \"ecd4f41a-e074-49eb-950d-44a9a4140304\") " pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.486103 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecd4f41a-e074-49eb-950d-44a9a4140304-catalog-content\") pod \"redhat-marketplace-ptklb\" (UID: \"ecd4f41a-e074-49eb-950d-44a9a4140304\") " pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.486152 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvl85\" (UniqueName: \"kubernetes.io/projected/ecd4f41a-e074-49eb-950d-44a9a4140304-kube-api-access-kvl85\") pod \"redhat-marketplace-ptklb\" (UID: \"ecd4f41a-e074-49eb-950d-44a9a4140304\") " pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.486551 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:08.98653425 +0000 UTC m=+154.677500297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.486947 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecd4f41a-e074-49eb-950d-44a9a4140304-utilities\") pod \"redhat-marketplace-ptklb\" (UID: \"ecd4f41a-e074-49eb-950d-44a9a4140304\") " pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.487002 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecd4f41a-e074-49eb-950d-44a9a4140304-catalog-content\") pod \"redhat-marketplace-ptklb\" (UID: \"ecd4f41a-e074-49eb-950d-44a9a4140304\") " pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.506142 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvl85\" (UniqueName: \"kubernetes.io/projected/ecd4f41a-e074-49eb-950d-44a9a4140304-kube-api-access-kvl85\") pod \"redhat-marketplace-ptklb\" (UID: \"ecd4f41a-e074-49eb-950d-44a9a4140304\") " pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.587800 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.588301 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:09.088264088 +0000 UTC m=+154.779230135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.599308 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.606640 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g8mt"] Feb 19 00:10:08 crc kubenswrapper[4825]: W0219 00:10:08.616773 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb59f0427_6570_4c83_9040_1267e58584aa.slice/crio-2ca7ebf325f102be30f3ef9581c8b838ca2017f554d336d77047ed5831a1ab67 WatchSource:0}: Error finding container 2ca7ebf325f102be30f3ef9581c8b838ca2017f554d336d77047ed5831a1ab67: Status 404 returned error can't find the container with id 2ca7ebf325f102be30f3ef9581c8b838ca2017f554d336d77047ed5831a1ab67 Feb 19 00:10:08 crc kubenswrapper[4825]: W0219 00:10:08.655668 4825 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5430045e_a57c_4dd3_8205_737c277afd00.slice/crio-3148d6f4a53ddc0a79debdc2e2ad2add1840cd71e1dd6a5c124df60ce7e3c04c WatchSource:0}: Error finding container 3148d6f4a53ddc0a79debdc2e2ad2add1840cd71e1dd6a5c124df60ce7e3c04c: Status 404 returned error can't find the container with id 3148d6f4a53ddc0a79debdc2e2ad2add1840cd71e1dd6a5c124df60ce7e3c04c Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.689414 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.689759 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:09.189728999 +0000 UTC m=+154.880695046 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.690115 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.690657 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:09.190636678 +0000 UTC m=+154.881602725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.693119 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.720480 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hl5qz" event={"ID":"bee79dcb-9870-4633-9f5b-56d2177c2616","Type":"ContainerDied","Data":"92e9965e71f6d3f7c61249380fc85723ef1819a53ba75ba29746187a910df623"} Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.719487 4825 generic.go:334] "Generic (PLEG): container finished" podID="bee79dcb-9870-4633-9f5b-56d2177c2616" containerID="92e9965e71f6d3f7c61249380fc85723ef1819a53ba75ba29746187a910df623" exitCode=0 Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.732453 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f8slh" event={"ID":"0556f35e-37e8-4ae8-9bc4-32394e9f86ab","Type":"ContainerStarted","Data":"37d4227b6992b2f440a1802f68f85911cd41dccc21c970ff4d8ff547947fba06"} Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.737332 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b59f0427-6570-4c83-9040-1267e58584aa","Type":"ContainerStarted","Data":"2ca7ebf325f102be30f3ef9581c8b838ca2017f554d336d77047ed5831a1ab67"} Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.745350 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"10032c2d-c770-40c7-9d13-15b1de8a4257","Type":"ContainerDied","Data":"b14cd3e1cea137e9272d439dafe040592b03eda8c46d0059434199691fc7d749"} Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.745388 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b14cd3e1cea137e9272d439dafe040592b03eda8c46d0059434199691fc7d749" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.745457 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.757999 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g8mt" event={"ID":"5430045e-a57c-4dd3-8205-737c277afd00","Type":"ContainerStarted","Data":"3148d6f4a53ddc0a79debdc2e2ad2add1840cd71e1dd6a5c124df60ce7e3c04c"} Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.764062 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.764125 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-q6rwr" event={"ID":"14aa4fde-5a0f-41ce-a61e-902c0597d698","Type":"ContainerDied","Data":"2b79b03bf5693da7278e54a847e3fd48833ee65c43d772a2b3c18274b264c444"} Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.764156 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b79b03bf5693da7278e54a847e3fd48833ee65c43d772a2b3c18274b264c444" Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.766190 4825 generic.go:334] "Generic (PLEG): container finished" podID="87f1d2aa-1887-4322-a053-12e950fa2250" containerID="2957966b42006bd68da26013ec891475ad8051da28b5f0d842bbf4a39cb2fa09" exitCode=0 Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.766245 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psl6h" event={"ID":"87f1d2aa-1887-4322-a053-12e950fa2250","Type":"ContainerDied","Data":"2957966b42006bd68da26013ec891475ad8051da28b5f0d842bbf4a39cb2fa09"} Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.781291 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xs8g" 
event={"ID":"fe3e4f27-2ef4-4187-911b-135249a4454f","Type":"ContainerDied","Data":"04495eea170b3b6ac7d6de27ddc9c4d259cae7e66e02a1d5d08aa8a23a9c3dae"} Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.781245 4825 generic.go:334] "Generic (PLEG): container finished" podID="fe3e4f27-2ef4-4187-911b-135249a4454f" containerID="04495eea170b3b6ac7d6de27ddc9c4d259cae7e66e02a1d5d08aa8a23a9c3dae" exitCode=0 Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.790930 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.792037 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:09.292020736 +0000 UTC m=+154.982986783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.892102 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.892517 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:09.392485264 +0000 UTC m=+155.083451311 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.917104 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptklb"] Feb 19 00:10:08 crc kubenswrapper[4825]: W0219 00:10:08.926306 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecd4f41a_e074_49eb_950d_44a9a4140304.slice/crio-f3e7fcc22564a6ec7c9d633b112add576e39fc64e5ba59f4adaabe0dd239b202 WatchSource:0}: Error finding container f3e7fcc22564a6ec7c9d633b112add576e39fc64e5ba59f4adaabe0dd239b202: Status 404 returned error can't find the container with id f3e7fcc22564a6ec7c9d633b112add576e39fc64e5ba59f4adaabe0dd239b202 Feb 19 00:10:08 crc kubenswrapper[4825]: I0219 00:10:08.993561 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:08 crc kubenswrapper[4825]: E0219 00:10:08.994296 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:09.494247223 +0000 UTC m=+155.185213270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.095374 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.095790 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:09.595774116 +0000 UTC m=+155.286740163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.125956 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:10:09 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Feb 19 00:10:09 crc kubenswrapper[4825]: [+]process-running ok Feb 19 00:10:09 crc kubenswrapper[4825]: healthz check failed Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.126043 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.195941 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.196137 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 00:10:09.696108469 +0000 UTC m=+155.387074516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.196207 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.196604 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:09.696586254 +0000 UTC m=+155.387552301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.297408 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.297632 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:09.79760378 +0000 UTC m=+155.488569827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.297717 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.298084 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:09.798069586 +0000 UTC m=+155.489035623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.330701 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zstqt"] Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.332087 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.335676 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.344675 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zstqt"] Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.398434 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.398648 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:09.898619276 +0000 UTC m=+155.589585323 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.398736 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.398792 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e372937-4e80-4153-bf75-7811efb6750b-utilities\") pod \"redhat-operators-zstqt\" (UID: \"1e372937-4e80-4153-bf75-7811efb6750b\") " pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.398888 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx2m5\" (UniqueName: \"kubernetes.io/projected/1e372937-4e80-4153-bf75-7811efb6750b-kube-api-access-mx2m5\") pod \"redhat-operators-zstqt\" (UID: \"1e372937-4e80-4153-bf75-7811efb6750b\") " pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.398975 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e372937-4e80-4153-bf75-7811efb6750b-catalog-content\") 
pod \"redhat-operators-zstqt\" (UID: \"1e372937-4e80-4153-bf75-7811efb6750b\") " pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.399107 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:09.899091241 +0000 UTC m=+155.590057398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.499860 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.500043 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e372937-4e80-4153-bf75-7811efb6750b-utilities\") pod \"redhat-operators-zstqt\" (UID: \"1e372937-4e80-4153-bf75-7811efb6750b\") " pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.500095 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-19 00:10:10.000065845 +0000 UTC m=+155.691031892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.500146 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx2m5\" (UniqueName: \"kubernetes.io/projected/1e372937-4e80-4153-bf75-7811efb6750b-kube-api-access-mx2m5\") pod \"redhat-operators-zstqt\" (UID: \"1e372937-4e80-4153-bf75-7811efb6750b\") " pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.500281 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e372937-4e80-4153-bf75-7811efb6750b-catalog-content\") pod \"redhat-operators-zstqt\" (UID: \"1e372937-4e80-4153-bf75-7811efb6750b\") " pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.500568 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e372937-4e80-4153-bf75-7811efb6750b-utilities\") pod \"redhat-operators-zstqt\" (UID: \"1e372937-4e80-4153-bf75-7811efb6750b\") " pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.501002 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e372937-4e80-4153-bf75-7811efb6750b-catalog-content\") pod 
\"redhat-operators-zstqt\" (UID: \"1e372937-4e80-4153-bf75-7811efb6750b\") " pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.527326 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx2m5\" (UniqueName: \"kubernetes.io/projected/1e372937-4e80-4153-bf75-7811efb6750b-kube-api-access-mx2m5\") pod \"redhat-operators-zstqt\" (UID: \"1e372937-4e80-4153-bf75-7811efb6750b\") " pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.601263 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.601657 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:10.101641759 +0000 UTC m=+155.792607806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.654888 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.702702 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.703078 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:10.203049127 +0000 UTC m=+155.894015194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.703284 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.703680 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-19 00:10:10.203659747 +0000 UTC m=+155.894625794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.734614 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sp65c"] Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.789237 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sp65c"] Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.789399 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.802620 4825 generic.go:334] "Generic (PLEG): container finished" podID="5430045e-a57c-4dd3-8205-737c277afd00" containerID="d4351e7bd44789a11607204315b513bf90b99893453154b5a3175cbaafe60ee4" exitCode=0 Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.802689 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g8mt" event={"ID":"5430045e-a57c-4dd3-8205-737c277afd00","Type":"ContainerDied","Data":"d4351e7bd44789a11607204315b513bf90b99893453154b5a3175cbaafe60ee4"} Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.804139 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.804677 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:10.304661363 +0000 UTC m=+155.995627410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.808495 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptklb" event={"ID":"ecd4f41a-e074-49eb-950d-44a9a4140304","Type":"ContainerStarted","Data":"f3e7fcc22564a6ec7c9d633b112add576e39fc64e5ba59f4adaabe0dd239b202"} Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.822224 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f8slh" event={"ID":"0556f35e-37e8-4ae8-9bc4-32394e9f86ab","Type":"ContainerStarted","Data":"5e310bd0e6ce890839786f9d5713122b2a0056b41a302a93060bb1e12960f646"} Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.826163 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b59f0427-6570-4c83-9040-1267e58584aa","Type":"ContainerStarted","Data":"317c7d86384dff3b8dc864452114d7ef0090a725bfeb6c89adfa4342626e7b3f"} Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.890191 4825 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.907326 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9f8ad3-8653-427d-ad82-6b0157a57827-utilities\") pod \"redhat-operators-sp65c\" (UID: 
\"3f9f8ad3-8653-427d-ad82-6b0157a57827\") " pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.907380 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72mc4\" (UniqueName: \"kubernetes.io/projected/3f9f8ad3-8653-427d-ad82-6b0157a57827-kube-api-access-72mc4\") pod \"redhat-operators-sp65c\" (UID: \"3f9f8ad3-8653-427d-ad82-6b0157a57827\") " pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.907460 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.907520 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9f8ad3-8653-427d-ad82-6b0157a57827-catalog-content\") pod \"redhat-operators-sp65c\" (UID: \"3f9f8ad3-8653-427d-ad82-6b0157a57827\") " pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:10:09 crc kubenswrapper[4825]: E0219 00:10:09.908654 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:10.408640675 +0000 UTC m=+156.099606722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:09 crc kubenswrapper[4825]: I0219 00:10:09.923686 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zstqt"] Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.008535 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:10 crc kubenswrapper[4825]: E0219 00:10:10.008923 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:10.508883564 +0000 UTC m=+156.199849621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.009571 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9f8ad3-8653-427d-ad82-6b0157a57827-utilities\") pod \"redhat-operators-sp65c\" (UID: \"3f9f8ad3-8653-427d-ad82-6b0157a57827\") " pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.009637 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72mc4\" (UniqueName: \"kubernetes.io/projected/3f9f8ad3-8653-427d-ad82-6b0157a57827-kube-api-access-72mc4\") pod \"redhat-operators-sp65c\" (UID: \"3f9f8ad3-8653-427d-ad82-6b0157a57827\") " pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.009710 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.009758 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9f8ad3-8653-427d-ad82-6b0157a57827-catalog-content\") pod \"redhat-operators-sp65c\" (UID: 
\"3f9f8ad3-8653-427d-ad82-6b0157a57827\") " pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.010223 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9f8ad3-8653-427d-ad82-6b0157a57827-utilities\") pod \"redhat-operators-sp65c\" (UID: \"3f9f8ad3-8653-427d-ad82-6b0157a57827\") " pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.010323 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9f8ad3-8653-427d-ad82-6b0157a57827-catalog-content\") pod \"redhat-operators-sp65c\" (UID: \"3f9f8ad3-8653-427d-ad82-6b0157a57827\") " pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:10:10 crc kubenswrapper[4825]: E0219 00:10:10.010341 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:10.510308181 +0000 UTC m=+156.201274398 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.044499 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72mc4\" (UniqueName: \"kubernetes.io/projected/3f9f8ad3-8653-427d-ad82-6b0157a57827-kube-api-access-72mc4\") pod \"redhat-operators-sp65c\" (UID: \"3f9f8ad3-8653-427d-ad82-6b0157a57827\") " pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.111042 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:10 crc kubenswrapper[4825]: E0219 00:10:10.111253 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:10.611221144 +0000 UTC m=+156.302187191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.111567 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:10 crc kubenswrapper[4825]: E0219 00:10:10.112048 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:10.61202925 +0000 UTC m=+156.302995297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.124945 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.131402 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:10:10 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld Feb 19 00:10:10 crc kubenswrapper[4825]: [+]process-running ok Feb 19 00:10:10 crc kubenswrapper[4825]: healthz check failed Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.131479 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.213187 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:10:10 crc kubenswrapper[4825]: E0219 00:10:10.213400 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:10:10.713364246 +0000 UTC m=+156.404330283 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.213724 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:10 crc kubenswrapper[4825]: E0219 00:10:10.214120 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:10:10.714112601 +0000 UTC m=+156.405078648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rlctl" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.243385 4825 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T00:10:09.890220884Z","Handler":null,"Name":""}
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.270988 4825 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.271045 4825 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.319545 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.329974 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.396224 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sp65c"]
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.421256 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.448618 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.448686 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.637587 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rlctl\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.777927 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-rm422"
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.783770 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-rm422"
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.784327 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.865871 4825 generic.go:334] "Generic (PLEG): container finished" podID="ecd4f41a-e074-49eb-950d-44a9a4140304" containerID="b7bf43c727837cbd11ac1a6ff1344b8e9f4844cf631404f4bcd3fc00474d87cf" exitCode=0
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.866940 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptklb" event={"ID":"ecd4f41a-e074-49eb-950d-44a9a4140304","Type":"ContainerDied","Data":"b7bf43c727837cbd11ac1a6ff1344b8e9f4844cf631404f4bcd3fc00474d87cf"}
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.877080 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-f8slh" event={"ID":"0556f35e-37e8-4ae8-9bc4-32394e9f86ab","Type":"ContainerStarted","Data":"c6dd2e48ab090c0ce1d71f783581b9ee22b8bc5da2525dbfad8c7a9cefa5e9ad"}
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.886636 4825 generic.go:334] "Generic (PLEG): container finished" podID="b59f0427-6570-4c83-9040-1267e58584aa" containerID="317c7d86384dff3b8dc864452114d7ef0090a725bfeb6c89adfa4342626e7b3f" exitCode=0
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.886793 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b59f0427-6570-4c83-9040-1267e58584aa","Type":"ContainerDied","Data":"317c7d86384dff3b8dc864452114d7ef0090a725bfeb6c89adfa4342626e7b3f"}
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.888448 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp65c" event={"ID":"3f9f8ad3-8653-427d-ad82-6b0157a57827","Type":"ContainerStarted","Data":"11a52d4484934cc3a457548047af163a688cc5a4c13144c9b126e8cea87af0ba"}
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.889918 4825 generic.go:334] "Generic (PLEG): container finished" podID="1e372937-4e80-4153-bf75-7811efb6750b" containerID="cfeec4b06ba12621c3c01feed920d86944180ac4b996dc78a20a6928d56fac43" exitCode=0
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.890839 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zstqt" event={"ID":"1e372937-4e80-4153-bf75-7811efb6750b","Type":"ContainerDied","Data":"cfeec4b06ba12621c3c01feed920d86944180ac4b996dc78a20a6928d56fac43"}
Feb 19 00:10:10 crc kubenswrapper[4825]: I0219 00:10:10.890865 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zstqt" event={"ID":"1e372937-4e80-4153-bf75-7811efb6750b","Type":"ContainerStarted","Data":"06dd147a8bd97a80361db08b3f90fe62fb9ad08513d927731503f19feea7c616"}
Feb 19 00:10:11 crc kubenswrapper[4825]: I0219 00:10:11.082844 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 19 00:10:11 crc kubenswrapper[4825]: I0219 00:10:11.122655 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rlctl"]
Feb 19 00:10:11 crc kubenswrapper[4825]: I0219 00:10:11.126209 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:10:11 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld
Feb 19 00:10:11 crc kubenswrapper[4825]: [+]process-running ok
Feb 19 00:10:11 crc kubenswrapper[4825]: healthz check failed
Feb 19 00:10:11 crc kubenswrapper[4825]: I0219 00:10:11.126262 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:10:11 crc kubenswrapper[4825]: I0219 00:10:11.899535 4825 generic.go:334] "Generic (PLEG): container finished" podID="3f9f8ad3-8653-427d-ad82-6b0157a57827" containerID="37044e900b85346e4dc290d5e1ac7842046efeed3ead0bd24e377240e5a98a8d" exitCode=0
Feb 19 00:10:11 crc kubenswrapper[4825]: I0219 00:10:11.899663 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp65c" event={"ID":"3f9f8ad3-8653-427d-ad82-6b0157a57827","Type":"ContainerDied","Data":"37044e900b85346e4dc290d5e1ac7842046efeed3ead0bd24e377240e5a98a8d"}
Feb 19 00:10:11 crc kubenswrapper[4825]: I0219 00:10:11.901695 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" event={"ID":"43b75e6e-5b2d-4690-b500-20ad18a1e042","Type":"ContainerStarted","Data":"a519ee1f12c9bedad67b71edd3e97b08c7a2035053dc8d909b23b8b83578d8d6"}
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.134245 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:10:12 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld
Feb 19 00:10:12 crc kubenswrapper[4825]: [+]process-running ok
Feb 19 00:10:12 crc kubenswrapper[4825]: healthz check failed
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.134820 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.232395 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.253858 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-f8slh" podStartSLOduration=18.253838174 podStartE2EDuration="18.253838174s" podCreationTimestamp="2026-02-19 00:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:11.969833339 +0000 UTC m=+157.660799406" watchObservedRunningTime="2026-02-19 00:10:12.253838174 +0000 UTC m=+157.944804221"
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.351590 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b59f0427-6570-4c83-9040-1267e58584aa-kubelet-dir\") pod \"b59f0427-6570-4c83-9040-1267e58584aa\" (UID: \"b59f0427-6570-4c83-9040-1267e58584aa\") "
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.351694 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b59f0427-6570-4c83-9040-1267e58584aa-kube-api-access\") pod \"b59f0427-6570-4c83-9040-1267e58584aa\" (UID: \"b59f0427-6570-4c83-9040-1267e58584aa\") "
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.351747 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b59f0427-6570-4c83-9040-1267e58584aa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b59f0427-6570-4c83-9040-1267e58584aa" (UID: "b59f0427-6570-4c83-9040-1267e58584aa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.352037 4825 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b59f0427-6570-4c83-9040-1267e58584aa-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.359385 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59f0427-6570-4c83-9040-1267e58584aa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b59f0427-6570-4c83-9040-1267e58584aa" (UID: "b59f0427-6570-4c83-9040-1267e58584aa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.454046 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b59f0427-6570-4c83-9040-1267e58584aa-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.910873 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b59f0427-6570-4c83-9040-1267e58584aa","Type":"ContainerDied","Data":"2ca7ebf325f102be30f3ef9581c8b838ca2017f554d336d77047ed5831a1ab67"}
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.910919 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ca7ebf325f102be30f3ef9581c8b838ca2017f554d336d77047ed5831a1ab67"
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.910942 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.912726 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" event={"ID":"43b75e6e-5b2d-4690-b500-20ad18a1e042","Type":"ContainerStarted","Data":"e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490"}
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.912973 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl"
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.935455 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xbfwq"
Feb 19 00:10:12 crc kubenswrapper[4825]: I0219 00:10:12.935311 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" podStartSLOduration=136.935292056 podStartE2EDuration="2m16.935292056s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:12.931800812 +0000 UTC m=+158.622766859" watchObservedRunningTime="2026-02-19 00:10:12.935292056 +0000 UTC m=+158.626258103"
Feb 19 00:10:13 crc kubenswrapper[4825]: I0219 00:10:13.127446 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:10:13 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld
Feb 19 00:10:13 crc kubenswrapper[4825]: [+]process-running ok
Feb 19 00:10:13 crc kubenswrapper[4825]: healthz check failed
Feb 19 00:10:13 crc kubenswrapper[4825]: I0219 00:10:13.128001 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:10:14 crc kubenswrapper[4825]: I0219 00:10:14.238525 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:10:14 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld
Feb 19 00:10:14 crc kubenswrapper[4825]: [+]process-running ok
Feb 19 00:10:14 crc kubenswrapper[4825]: healthz check failed
Feb 19 00:10:14 crc kubenswrapper[4825]: I0219 00:10:14.238600 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:10:15 crc kubenswrapper[4825]: I0219 00:10:15.041092 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Feb 19 00:10:15 crc kubenswrapper[4825]: I0219 00:10:15.041658 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Feb 19 00:10:15 crc kubenswrapper[4825]: I0219 00:10:15.042265 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Feb 19 00:10:15 crc kubenswrapper[4825]: I0219 00:10:15.042294 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Feb 19 00:10:15 crc kubenswrapper[4825]: I0219 00:10:15.132376 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:10:15 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld
Feb 19 00:10:15 crc kubenswrapper[4825]: [+]process-running ok
Feb 19 00:10:15 crc kubenswrapper[4825]: healthz check failed
Feb 19 00:10:15 crc kubenswrapper[4825]: I0219 00:10:15.132441 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:10:15 crc kubenswrapper[4825]: I0219 00:10:15.979621 4825 patch_prober.go:28] interesting pod/console-f9d7485db-92549 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Feb 19 00:10:15 crc kubenswrapper[4825]: I0219 00:10:15.979734 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-92549" podUID="1f6724cf-dc1e-44cc-8f59-91d3e8b00970" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused"
Feb 19 00:10:16 crc kubenswrapper[4825]: I0219 00:10:16.125477 4825 patch_prober.go:28] interesting pod/router-default-5444994796-lhzt4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:10:16 crc kubenswrapper[4825]: [-]has-synced failed: reason withheld
Feb 19 00:10:16 crc kubenswrapper[4825]: [+]process-running ok
Feb 19 00:10:16 crc kubenswrapper[4825]: healthz check failed
Feb 19 00:10:16 crc kubenswrapper[4825]: I0219 00:10:16.125618 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-lhzt4" podUID="bda9acd8-7428-4ed2-aa1c-54c759b39e97" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:10:17 crc kubenswrapper[4825]: I0219 00:10:17.128246 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:10:17 crc kubenswrapper[4825]: I0219 00:10:17.131766 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-lhzt4"
Feb 19 00:10:20 crc kubenswrapper[4825]: I0219 00:10:20.188757 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:10:20 crc kubenswrapper[4825]: I0219 00:10:20.288005 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80aa664d-e111-41f6-815d-f4185e1f72ff-metrics-certs\") pod \"network-metrics-daemon-bhnmw\" (UID: \"80aa664d-e111-41f6-815d-f4185e1f72ff\") " pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:10:20 crc kubenswrapper[4825]: I0219 00:10:20.583165 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bhnmw"
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.040352 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.041043 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.044727 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.044782 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.044829 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-bf56f"
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.045467 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.045591 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.045788 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"0261c888f549af7767ac3ac7f5fdc1b48ae7efd01bbe1824d2e5962a970915fe"} pod="openshift-console/downloads-7954f5f757-bf56f" containerMessage="Container download-server failed liveness probe, will be restarted"
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.045915 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" containerID="cri-o://0261c888f549af7767ac3ac7f5fdc1b48ae7efd01bbe1824d2e5962a970915fe" gracePeriod=2
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.491740 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bwrjg"]
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.492018 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" podUID="f110da82-46c4-44d5-91f6-195be763d96f" containerName="controller-manager" containerID="cri-o://86bfc99fc089e0ddaf9e2975beb842e593f60bbc8deda0da46c2b3a7213ff6e9" gracePeriod=30
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.515974 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn"]
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.516201 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" podUID="fff3e916-b71f-44c3-a4ac-a78efc547a28" containerName="route-controller-manager" containerID="cri-o://c1ef257889273d81d46af18b3db925a09e8995065bd63fbcaebf4958f15d9e9a" gracePeriod=30
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.536286 4825 generic.go:334] "Generic (PLEG): container finished" podID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerID="0261c888f549af7767ac3ac7f5fdc1b48ae7efd01bbe1824d2e5962a970915fe" exitCode=0
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.536352 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bf56f" event={"ID":"29be03fe-da22-41a7-9243-67aa815fbfb1","Type":"ContainerDied","Data":"0261c888f549af7767ac3ac7f5fdc1b48ae7efd01bbe1824d2e5962a970915fe"}
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.984554 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-92549"
Feb 19 00:10:25 crc kubenswrapper[4825]: I0219 00:10:25.992765 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-92549"
Feb 19 00:10:26 crc kubenswrapper[4825]: I0219 00:10:26.091935 4825 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-96ctn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Feb 19 00:10:26 crc kubenswrapper[4825]: I0219 00:10:26.092032 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" podUID="fff3e916-b71f-44c3-a4ac-a78efc547a28" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Feb 19 00:10:26 crc kubenswrapper[4825]: I0219 00:10:26.543778 4825 generic.go:334] "Generic (PLEG): container finished" podID="f110da82-46c4-44d5-91f6-195be763d96f" containerID="86bfc99fc089e0ddaf9e2975beb842e593f60bbc8deda0da46c2b3a7213ff6e9" exitCode=0
Feb 19 00:10:26 crc kubenswrapper[4825]: I0219 00:10:26.543861 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" event={"ID":"f110da82-46c4-44d5-91f6-195be763d96f","Type":"ContainerDied","Data":"86bfc99fc089e0ddaf9e2975beb842e593f60bbc8deda0da46c2b3a7213ff6e9"}
Feb 19 00:10:26 crc kubenswrapper[4825]: I0219 00:10:26.546585 4825 generic.go:334] "Generic (PLEG): container finished" podID="fff3e916-b71f-44c3-a4ac-a78efc547a28" containerID="c1ef257889273d81d46af18b3db925a09e8995065bd63fbcaebf4958f15d9e9a" exitCode=0
Feb 19 00:10:26 crc kubenswrapper[4825]: I0219 00:10:26.546712 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" event={"ID":"fff3e916-b71f-44c3-a4ac-a78efc547a28","Type":"ContainerDied","Data":"c1ef257889273d81d46af18b3db925a09e8995065bd63fbcaebf4958f15d9e9a"}
Feb 19 00:10:28 crc kubenswrapper[4825]: I0219 00:10:28.823490 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:10:28 crc kubenswrapper[4825]: I0219 00:10:28.823904 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.391875 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.436205 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65779f4744-dlj22"]
Feb 19 00:10:30 crc kubenswrapper[4825]: E0219 00:10:30.436573 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59f0427-6570-4c83-9040-1267e58584aa" containerName="pruner"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.436591 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59f0427-6570-4c83-9040-1267e58584aa" containerName="pruner"
Feb 19 00:10:30 crc kubenswrapper[4825]: E0219 00:10:30.436613 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f110da82-46c4-44d5-91f6-195be763d96f" containerName="controller-manager"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.436623 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f110da82-46c4-44d5-91f6-195be763d96f" containerName="controller-manager"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.436759 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f110da82-46c4-44d5-91f6-195be763d96f" containerName="controller-manager"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.436792 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="b59f0427-6570-4c83-9040-1267e58584aa" containerName="pruner"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.437412 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65779f4744-dlj22"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.439392 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65779f4744-dlj22"]
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.516137 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-config\") pod \"f110da82-46c4-44d5-91f6-195be763d96f\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") "
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.516323 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-proxy-ca-bundles\") pod \"f110da82-46c4-44d5-91f6-195be763d96f\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") "
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.516353 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-client-ca\") pod \"f110da82-46c4-44d5-91f6-195be763d96f\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") "
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.516385 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f110da82-46c4-44d5-91f6-195be763d96f-serving-cert\") pod \"f110da82-46c4-44d5-91f6-195be763d96f\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") "
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.516472 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jplj\" (UniqueName: \"kubernetes.io/projected/f110da82-46c4-44d5-91f6-195be763d96f-kube-api-access-7jplj\") pod \"f110da82-46c4-44d5-91f6-195be763d96f\" (UID: \"f110da82-46c4-44d5-91f6-195be763d96f\") "
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.516734 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c8db180-cc65-4921-97d2-47ac7678b6c0-serving-cert\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.516788 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-config\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.516819 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-client-ca\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.516891 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-proxy-ca-bundles\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.516930 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9gqr\" (UniqueName: \"kubernetes.io/projected/0c8db180-cc65-4921-97d2-47ac7678b6c0-kube-api-access-z9gqr\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.517395 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-client-ca" (OuterVolumeSpecName: "client-ca") pod "f110da82-46c4-44d5-91f6-195be763d96f" (UID: "f110da82-46c4-44d5-91f6-195be763d96f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.517929 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-config" (OuterVolumeSpecName: "config") pod "f110da82-46c4-44d5-91f6-195be763d96f" (UID: "f110da82-46c4-44d5-91f6-195be763d96f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.517977 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f110da82-46c4-44d5-91f6-195be763d96f" (UID: "f110da82-46c4-44d5-91f6-195be763d96f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.523299 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f110da82-46c4-44d5-91f6-195be763d96f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f110da82-46c4-44d5-91f6-195be763d96f" (UID: "f110da82-46c4-44d5-91f6-195be763d96f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.523391 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f110da82-46c4-44d5-91f6-195be763d96f-kube-api-access-7jplj" (OuterVolumeSpecName: "kube-api-access-7jplj") pod "f110da82-46c4-44d5-91f6-195be763d96f" (UID: "f110da82-46c4-44d5-91f6-195be763d96f"). InnerVolumeSpecName "kube-api-access-7jplj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.578319 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg" event={"ID":"f110da82-46c4-44d5-91f6-195be763d96f","Type":"ContainerDied","Data":"d32fd3dc89551756a4336a7d7fa5b34baccbac6992b86cf9c7eb6db137f108a1"}
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.578411 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bwrjg"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.578433 4825 scope.go:117] "RemoveContainer" containerID="86bfc99fc089e0ddaf9e2975beb842e593f60bbc8deda0da46c2b3a7213ff6e9"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.606683 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bwrjg"]
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.609091 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bwrjg"]
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.618067 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c8db180-cc65-4921-97d2-47ac7678b6c0-serving-cert\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.618104 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-config\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.618134 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-client-ca\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.618163 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-proxy-ca-bundles\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.618191 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9gqr\" (UniqueName: \"kubernetes.io/projected/0c8db180-cc65-4921-97d2-47ac7678b6c0-kube-api-access-z9gqr\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22"
Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.618223 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-config\") on node \"crc\" DevicePath 
\"\"" Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.618234 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.618247 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f110da82-46c4-44d5-91f6-195be763d96f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.618354 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f110da82-46c4-44d5-91f6-195be763d96f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.618719 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jplj\" (UniqueName: \"kubernetes.io/projected/f110da82-46c4-44d5-91f6-195be763d96f-kube-api-access-7jplj\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.619312 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-client-ca\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.619922 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-proxy-ca-bundles\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.620114 4825 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-config\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.623853 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c8db180-cc65-4921-97d2-47ac7678b6c0-serving-cert\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.635496 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9gqr\" (UniqueName: \"kubernetes.io/projected/0c8db180-cc65-4921-97d2-47ac7678b6c0-kube-api-access-z9gqr\") pod \"controller-manager-65779f4744-dlj22\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.756630 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" Feb 19 00:10:30 crc kubenswrapper[4825]: I0219 00:10:30.791236 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:10:31 crc kubenswrapper[4825]: I0219 00:10:31.072984 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f110da82-46c4-44d5-91f6-195be763d96f" path="/var/lib/kubelet/pods/f110da82-46c4-44d5-91f6-195be763d96f/volumes" Feb 19 00:10:35 crc kubenswrapper[4825]: I0219 00:10:35.040737 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 00:10:35 crc kubenswrapper[4825]: I0219 00:10:35.041264 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 00:10:36 crc kubenswrapper[4825]: I0219 00:10:36.624743 4825 generic.go:334] "Generic (PLEG): container finished" podID="7e0cdf1c-faf9-4a21-8beb-1b712bd266fc" containerID="42700d989a15b37e2c649b3aaa6889f8783a5666cfea1cfa99bfb58a6cdf6cd5" exitCode=0 Feb 19 00:10:36 crc kubenswrapper[4825]: I0219 00:10:36.624841 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-khn5f" event={"ID":"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc","Type":"ContainerDied","Data":"42700d989a15b37e2c649b3aaa6889f8783a5666cfea1cfa99bfb58a6cdf6cd5"} Feb 19 00:10:37 crc kubenswrapper[4825]: I0219 00:10:37.091182 4825 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-96ctn 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 00:10:37 crc kubenswrapper[4825]: I0219 00:10:37.091262 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" podUID="fff3e916-b71f-44c3-a4ac-a78efc547a28" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 00:10:37 crc kubenswrapper[4825]: I0219 00:10:37.543316 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-97jhp" Feb 19 00:10:40 crc kubenswrapper[4825]: E0219 00:10:40.908803 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 00:10:40 crc kubenswrapper[4825]: E0219 00:10:40.910847 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72mc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sp65c_openshift-marketplace(3f9f8ad3-8653-427d-ad82-6b0157a57827): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:10:40 crc kubenswrapper[4825]: E0219 00:10:40.912293 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sp65c" podUID="3f9f8ad3-8653-427d-ad82-6b0157a57827" Feb 19 00:10:43 crc 
kubenswrapper[4825]: I0219 00:10:43.101031 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:10:43 crc kubenswrapper[4825]: E0219 00:10:43.981836 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sp65c" podUID="3f9f8ad3-8653-427d-ad82-6b0157a57827" Feb 19 00:10:44 crc kubenswrapper[4825]: E0219 00:10:44.122169 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 00:10:44 crc kubenswrapper[4825]: E0219 00:10:44.122368 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvl85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ptklb_openshift-marketplace(ecd4f41a-e074-49eb-950d-44a9a4140304): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:10:44 crc kubenswrapper[4825]: E0219 00:10:44.127289 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ptklb" podUID="ecd4f41a-e074-49eb-950d-44a9a4140304" Feb 19 00:10:44 crc 
kubenswrapper[4825]: E0219 00:10:44.241433 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 00:10:44 crc kubenswrapper[4825]: E0219 00:10:44.241646 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lp79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-7g8mt_openshift-marketplace(5430045e-a57c-4dd3-8205-737c277afd00): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:10:44 crc kubenswrapper[4825]: E0219 00:10:44.243028 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-7g8mt" podUID="5430045e-a57c-4dd3-8205-737c277afd00" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.042008 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.042535 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.473554 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65779f4744-dlj22"] Feb 19 00:10:45 crc kubenswrapper[4825]: E0219 00:10:45.673999 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ptklb" podUID="ecd4f41a-e074-49eb-950d-44a9a4140304" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.740658 4825 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.743751 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-khn5f" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.749811 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7e0cdf1c-faf9-4a21-8beb-1b712bd266fc-serviceca\") pod \"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc\" (UID: \"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc\") " Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.749849 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fff3e916-b71f-44c3-a4ac-a78efc547a28-client-ca\") pod \"fff3e916-b71f-44c3-a4ac-a78efc547a28\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.749882 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff3e916-b71f-44c3-a4ac-a78efc547a28-config\") pod \"fff3e916-b71f-44c3-a4ac-a78efc547a28\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.749920 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ztk7\" (UniqueName: \"kubernetes.io/projected/fff3e916-b71f-44c3-a4ac-a78efc547a28-kube-api-access-5ztk7\") pod \"fff3e916-b71f-44c3-a4ac-a78efc547a28\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.749947 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fff3e916-b71f-44c3-a4ac-a78efc547a28-serving-cert\") 
pod \"fff3e916-b71f-44c3-a4ac-a78efc547a28\" (UID: \"fff3e916-b71f-44c3-a4ac-a78efc547a28\") " Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.749990 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsrns\" (UniqueName: \"kubernetes.io/projected/7e0cdf1c-faf9-4a21-8beb-1b712bd266fc-kube-api-access-xsrns\") pod \"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc\" (UID: \"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc\") " Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.751412 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0cdf1c-faf9-4a21-8beb-1b712bd266fc-serviceca" (OuterVolumeSpecName: "serviceca") pod "7e0cdf1c-faf9-4a21-8beb-1b712bd266fc" (UID: "7e0cdf1c-faf9-4a21-8beb-1b712bd266fc"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.751569 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff3e916-b71f-44c3-a4ac-a78efc547a28-client-ca" (OuterVolumeSpecName: "client-ca") pod "fff3e916-b71f-44c3-a4ac-a78efc547a28" (UID: "fff3e916-b71f-44c3-a4ac-a78efc547a28"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.753042 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff3e916-b71f-44c3-a4ac-a78efc547a28-config" (OuterVolumeSpecName: "config") pod "fff3e916-b71f-44c3-a4ac-a78efc547a28" (UID: "fff3e916-b71f-44c3-a4ac-a78efc547a28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.757786 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0cdf1c-faf9-4a21-8beb-1b712bd266fc-kube-api-access-xsrns" (OuterVolumeSpecName: "kube-api-access-xsrns") pod "7e0cdf1c-faf9-4a21-8beb-1b712bd266fc" (UID: "7e0cdf1c-faf9-4a21-8beb-1b712bd266fc"). InnerVolumeSpecName "kube-api-access-xsrns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.762444 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff3e916-b71f-44c3-a4ac-a78efc547a28-kube-api-access-5ztk7" (OuterVolumeSpecName: "kube-api-access-5ztk7") pod "fff3e916-b71f-44c3-a4ac-a78efc547a28" (UID: "fff3e916-b71f-44c3-a4ac-a78efc547a28"). InnerVolumeSpecName "kube-api-access-5ztk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.788450 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff3e916-b71f-44c3-a4ac-a78efc547a28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fff3e916-b71f-44c3-a4ac-a78efc547a28" (UID: "fff3e916-b71f-44c3-a4ac-a78efc547a28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.851270 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fff3e916-b71f-44c3-a4ac-a78efc547a28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.851309 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsrns\" (UniqueName: \"kubernetes.io/projected/7e0cdf1c-faf9-4a21-8beb-1b712bd266fc-kube-api-access-xsrns\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.851319 4825 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7e0cdf1c-faf9-4a21-8beb-1b712bd266fc-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.851328 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fff3e916-b71f-44c3-a4ac-a78efc547a28-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.851338 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff3e916-b71f-44c3-a4ac-a78efc547a28-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:45 crc kubenswrapper[4825]: I0219 00:10:45.851347 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ztk7\" (UniqueName: \"kubernetes.io/projected/fff3e916-b71f-44c3-a4ac-a78efc547a28-kube-api-access-5ztk7\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:45 crc kubenswrapper[4825]: E0219 00:10:45.959537 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 00:10:45 crc kubenswrapper[4825]: E0219 
00:10:45.959763 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g69f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zgq6r_openshift-marketplace(d7139b54-5e59-487a-bbf3-2ac657e5e39d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:10:45 crc kubenswrapper[4825]: E0219 00:10:45.960964 4825 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zgq6r" podUID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.028666 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bhnmw"] Feb 19 00:10:46 crc kubenswrapper[4825]: W0219 00:10:46.037653 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80aa664d_e111_41f6_815d_f4185e1f72ff.slice/crio-f2ce686298b9cd1014a89a219494bbb99dd538b4149c5ca76dd63a0c60b42adc WatchSource:0}: Error finding container f2ce686298b9cd1014a89a219494bbb99dd538b4149c5ca76dd63a0c60b42adc: Status 404 returned error can't find the container with id f2ce686298b9cd1014a89a219494bbb99dd538b4149c5ca76dd63a0c60b42adc Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.056020 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65779f4744-dlj22"] Feb 19 00:10:46 crc kubenswrapper[4825]: W0219 00:10:46.078075 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c8db180_cc65_4921_97d2_47ac7678b6c0.slice/crio-77cbc153f76236d7cb8d430d827635428517235ac9ceae78ac18cf34ca481f6f WatchSource:0}: Error finding container 77cbc153f76236d7cb8d430d827635428517235ac9ceae78ac18cf34ca481f6f: Status 404 returned error can't find the container with id 77cbc153f76236d7cb8d430d827635428517235ac9ceae78ac18cf34ca481f6f Feb 19 00:10:46 crc kubenswrapper[4825]: E0219 00:10:46.216864 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 00:10:46 crc kubenswrapper[4825]: E0219 00:10:46.217047 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gs2bq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-psl6h_openshift-marketplace(87f1d2aa-1887-4322-a053-12e950fa2250): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 
00:10:46 crc kubenswrapper[4825]: E0219 00:10:46.218247 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-psl6h" podUID="87f1d2aa-1887-4322-a053-12e950fa2250" Feb 19 00:10:46 crc kubenswrapper[4825]: E0219 00:10:46.595067 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 00:10:46 crc kubenswrapper[4825]: E0219 00:10:46.595619 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fg4nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8xs8g_openshift-marketplace(fe3e4f27-2ef4-4187-911b-135249a4454f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:10:46 crc kubenswrapper[4825]: E0219 00:10:46.596891 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8xs8g" podUID="fe3e4f27-2ef4-4187-911b-135249a4454f" Feb 19 00:10:46 crc 
kubenswrapper[4825]: E0219 00:10:46.626165 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 00:10:46 crc kubenswrapper[4825]: E0219 00:10:46.626388 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mx2m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-zstqt_openshift-marketplace(1e372937-4e80-4153-bf75-7811efb6750b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:10:46 crc kubenswrapper[4825]: E0219 00:10:46.627572 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zstqt" podUID="1e372937-4e80-4153-bf75-7811efb6750b" Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.710597 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" event={"ID":"fff3e916-b71f-44c3-a4ac-a78efc547a28","Type":"ContainerDied","Data":"7d9382f10c20d16f8d16f470eb937adaf6184af8e9a7724e0a81c8e580d764b6"} Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.710676 4825 scope.go:117] "RemoveContainer" containerID="c1ef257889273d81d46af18b3db925a09e8995065bd63fbcaebf4958f15d9e9a" Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.710793 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn" Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.723107 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" event={"ID":"0c8db180-cc65-4921-97d2-47ac7678b6c0","Type":"ContainerStarted","Data":"b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca"} Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.723180 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" event={"ID":"0c8db180-cc65-4921-97d2-47ac7678b6c0","Type":"ContainerStarted","Data":"77cbc153f76236d7cb8d430d827635428517235ac9ceae78ac18cf34ca481f6f"} Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.723307 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" podUID="0c8db180-cc65-4921-97d2-47ac7678b6c0" containerName="controller-manager" containerID="cri-o://b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca" gracePeriod=30 Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.723675 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.738466 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.739065 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-khn5f" event={"ID":"7e0cdf1c-faf9-4a21-8beb-1b712bd266fc","Type":"ContainerDied","Data":"97d08616e9be0202b076172ec8d13ac6f719ad54a18a896c48fbcba0e626b6e3"} Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.739109 4825 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97d08616e9be0202b076172ec8d13ac6f719ad54a18a896c48fbcba0e626b6e3" Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.739071 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-khn5f" Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.743434 4825 generic.go:334] "Generic (PLEG): container finished" podID="bee79dcb-9870-4633-9f5b-56d2177c2616" containerID="782c325f1a15f1654f72eb28dc14d3baf39eaf04b31af440d0bd647cfaf181e8" exitCode=0 Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.743541 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hl5qz" event={"ID":"bee79dcb-9870-4633-9f5b-56d2177c2616","Type":"ContainerDied","Data":"782c325f1a15f1654f72eb28dc14d3baf39eaf04b31af440d0bd647cfaf181e8"} Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.753725 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" event={"ID":"80aa664d-e111-41f6-815d-f4185e1f72ff","Type":"ContainerStarted","Data":"8c46cee9ee95b4bc889689d9249be2441e33dc7b6e505c9b3c58b5a8105d20f2"} Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.753776 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" event={"ID":"80aa664d-e111-41f6-815d-f4185e1f72ff","Type":"ContainerStarted","Data":"f2ce686298b9cd1014a89a219494bbb99dd538b4149c5ca76dd63a0c60b42adc"} Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.761128 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" podStartSLOduration=21.761112521 podStartE2EDuration="21.761112521s" podCreationTimestamp="2026-02-19 00:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-19 00:10:46.757976139 +0000 UTC m=+192.448942186" watchObservedRunningTime="2026-02-19 00:10:46.761112521 +0000 UTC m=+192.452078568" Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.763276 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bf56f" event={"ID":"29be03fe-da22-41a7-9243-67aa815fbfb1","Type":"ContainerStarted","Data":"0b82b238d41e9212309bb620fa064d42909d47c1d3e3c4cb8c583e643471a92d"} Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.764291 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.764342 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 00:10:46 crc kubenswrapper[4825]: E0219 00:10:46.765460 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zgq6r" podUID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" Feb 19 00:10:46 crc kubenswrapper[4825]: E0219 00:10:46.768099 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8xs8g" podUID="fe3e4f27-2ef4-4187-911b-135249a4454f" Feb 19 00:10:46 crc 
kubenswrapper[4825]: E0219 00:10:46.769011 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-psl6h" podUID="87f1d2aa-1887-4322-a053-12e950fa2250" Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.862302 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn"] Feb 19 00:10:46 crc kubenswrapper[4825]: I0219 00:10:46.865917 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-96ctn"] Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.080960 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fff3e916-b71f-44c3-a4ac-a78efc547a28" path="/var/lib/kubelet/pods/fff3e916-b71f-44c3-a4ac-a78efc547a28/volumes" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.117300 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.270831 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-746cc47688-xvzr9"] Feb 19 00:10:47 crc kubenswrapper[4825]: E0219 00:10:47.271770 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8db180-cc65-4921-97d2-47ac7678b6c0" containerName="controller-manager" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.271790 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8db180-cc65-4921-97d2-47ac7678b6c0" containerName="controller-manager" Feb 19 00:10:47 crc kubenswrapper[4825]: E0219 00:10:47.271807 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0cdf1c-faf9-4a21-8beb-1b712bd266fc" containerName="image-pruner" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.271815 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0cdf1c-faf9-4a21-8beb-1b712bd266fc" containerName="image-pruner" Feb 19 00:10:47 crc kubenswrapper[4825]: E0219 00:10:47.271830 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff3e916-b71f-44c3-a4ac-a78efc547a28" containerName="route-controller-manager" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.271837 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff3e916-b71f-44c3-a4ac-a78efc547a28" containerName="route-controller-manager" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.271984 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8db180-cc65-4921-97d2-47ac7678b6c0" containerName="controller-manager" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.271996 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff3e916-b71f-44c3-a4ac-a78efc547a28" containerName="route-controller-manager" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.272009 4825 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7e0cdf1c-faf9-4a21-8beb-1b712bd266fc" containerName="image-pruner" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.272649 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.278346 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9"] Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.279761 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.282614 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-client-ca\") pod \"0c8db180-cc65-4921-97d2-47ac7678b6c0\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.282685 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-config\") pod \"0c8db180-cc65-4921-97d2-47ac7678b6c0\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.282745 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c8db180-cc65-4921-97d2-47ac7678b6c0-serving-cert\") pod \"0c8db180-cc65-4921-97d2-47ac7678b6c0\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.282781 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9gqr\" (UniqueName: 
\"kubernetes.io/projected/0c8db180-cc65-4921-97d2-47ac7678b6c0-kube-api-access-z9gqr\") pod \"0c8db180-cc65-4921-97d2-47ac7678b6c0\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.282892 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-proxy-ca-bundles\") pod \"0c8db180-cc65-4921-97d2-47ac7678b6c0\" (UID: \"0c8db180-cc65-4921-97d2-47ac7678b6c0\") " Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.284523 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-client-ca" (OuterVolumeSpecName: "client-ca") pod "0c8db180-cc65-4921-97d2-47ac7678b6c0" (UID: "0c8db180-cc65-4921-97d2-47ac7678b6c0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.284820 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-config" (OuterVolumeSpecName: "config") pod "0c8db180-cc65-4921-97d2-47ac7678b6c0" (UID: "0c8db180-cc65-4921-97d2-47ac7678b6c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.285694 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0c8db180-cc65-4921-97d2-47ac7678b6c0" (UID: "0c8db180-cc65-4921-97d2-47ac7678b6c0"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.285913 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.286072 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.286162 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.286430 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.287961 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.288682 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.288718 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.288731 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c8db180-cc65-4921-97d2-47ac7678b6c0-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.294719 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 00:10:47 crc 
kubenswrapper[4825]: I0219 00:10:47.297100 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c8db180-cc65-4921-97d2-47ac7678b6c0-kube-api-access-z9gqr" (OuterVolumeSpecName: "kube-api-access-z9gqr") pod "0c8db180-cc65-4921-97d2-47ac7678b6c0" (UID: "0c8db180-cc65-4921-97d2-47ac7678b6c0"). InnerVolumeSpecName "kube-api-access-z9gqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.297555 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9"] Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.303950 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c8db180-cc65-4921-97d2-47ac7678b6c0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0c8db180-cc65-4921-97d2-47ac7678b6c0" (UID: "0c8db180-cc65-4921-97d2-47ac7678b6c0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.306374 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-746cc47688-xvzr9"] Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.390557 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-serving-cert\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.391087 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-client-ca\") pod \"route-controller-manager-59c8486cfc-f6vc9\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") " pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.391129 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-config\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.391155 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dsd8\" (UniqueName: \"kubernetes.io/projected/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-kube-api-access-8dsd8\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " 
pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.391176 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-proxy-ca-bundles\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.391196 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-client-ca\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.391225 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-serving-cert\") pod \"route-controller-manager-59c8486cfc-f6vc9\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") " pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.391262 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-config\") pod \"route-controller-manager-59c8486cfc-f6vc9\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") " pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.391281 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mtsq7\" (UniqueName: \"kubernetes.io/projected/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-kube-api-access-mtsq7\") pod \"route-controller-manager-59c8486cfc-f6vc9\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") " pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.391333 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c8db180-cc65-4921-97d2-47ac7678b6c0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.391347 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9gqr\" (UniqueName: \"kubernetes.io/projected/0c8db180-cc65-4921-97d2-47ac7678b6c0-kube-api-access-z9gqr\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.492494 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-config\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.492580 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dsd8\" (UniqueName: \"kubernetes.io/projected/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-kube-api-access-8dsd8\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.492610 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-proxy-ca-bundles\") pod 
\"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.492632 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-client-ca\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.492668 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-serving-cert\") pod \"route-controller-manager-59c8486cfc-f6vc9\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") " pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.492707 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-config\") pod \"route-controller-manager-59c8486cfc-f6vc9\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") " pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.492729 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtsq7\" (UniqueName: \"kubernetes.io/projected/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-kube-api-access-mtsq7\") pod \"route-controller-manager-59c8486cfc-f6vc9\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") " pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.492762 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-serving-cert\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.492780 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-client-ca\") pod \"route-controller-manager-59c8486cfc-f6vc9\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") " pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.494181 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-client-ca\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.494201 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-client-ca\") pod \"route-controller-manager-59c8486cfc-f6vc9\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") " pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.494386 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-config\") pod \"route-controller-manager-59c8486cfc-f6vc9\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") " pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 
00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.495222 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-config\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.495260 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-proxy-ca-bundles\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.499427 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-serving-cert\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.508064 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-serving-cert\") pod \"route-controller-manager-59c8486cfc-f6vc9\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") " pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.510185 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtsq7\" (UniqueName: \"kubernetes.io/projected/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-kube-api-access-mtsq7\") pod \"route-controller-manager-59c8486cfc-f6vc9\" (UID: 
\"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") " pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.510349 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dsd8\" (UniqueName: \"kubernetes.io/projected/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-kube-api-access-8dsd8\") pod \"controller-manager-746cc47688-xvzr9\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") " pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.598564 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.612958 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.773839 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hl5qz" event={"ID":"bee79dcb-9870-4633-9f5b-56d2177c2616","Type":"ContainerStarted","Data":"afe050f8a521a9841a00b3808af6fc95661f55fcf4e6ba8a0d8f4b12c15cb785"} Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.775672 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bhnmw" event={"ID":"80aa664d-e111-41f6-815d-f4185e1f72ff","Type":"ContainerStarted","Data":"d3029edd340461fa970bf914e82ec6047338f799cce0caf7338874b9adfb37ae"} Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.777596 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c8db180-cc65-4921-97d2-47ac7678b6c0" containerID="b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca" exitCode=0 Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.777647 4825 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.777650 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" event={"ID":"0c8db180-cc65-4921-97d2-47ac7678b6c0","Type":"ContainerDied","Data":"b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca"} Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.777686 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65779f4744-dlj22" event={"ID":"0c8db180-cc65-4921-97d2-47ac7678b6c0","Type":"ContainerDied","Data":"77cbc153f76236d7cb8d430d827635428517235ac9ceae78ac18cf34ca481f6f"} Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.777707 4825 scope.go:117] "RemoveContainer" containerID="b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.778056 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bf56f" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.778343 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.778378 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.831813 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-hl5qz" podStartSLOduration=3.376452103 podStartE2EDuration="41.83178379s" podCreationTimestamp="2026-02-19 00:10:06 +0000 UTC" firstStartedPulling="2026-02-19 00:10:08.722448676 +0000 UTC m=+154.413414733" lastFinishedPulling="2026-02-19 00:10:47.177780373 +0000 UTC m=+192.868746420" observedRunningTime="2026-02-19 00:10:47.791884249 +0000 UTC m=+193.482850296" watchObservedRunningTime="2026-02-19 00:10:47.83178379 +0000 UTC m=+193.522749847" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.833589 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bhnmw" podStartSLOduration=171.83300134 podStartE2EDuration="2m51.83300134s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:47.814488936 +0000 UTC m=+193.505455003" watchObservedRunningTime="2026-02-19 00:10:47.83300134 +0000 UTC m=+193.523967387" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.837047 4825 scope.go:117] "RemoveContainer" containerID="b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca" Feb 19 00:10:47 crc kubenswrapper[4825]: E0219 00:10:47.839819 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca\": container with ID starting with b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca not found: ID does not exist" containerID="b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.839862 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca"} err="failed to get container status 
\"b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca\": rpc error: code = NotFound desc = could not find container \"b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca\": container with ID starting with b038c0d2adb8811afaca68f8df57f7e22e63d415164830a0969eb7e1c872e6ca not found: ID does not exist" Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.840290 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65779f4744-dlj22"] Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.843906 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65779f4744-dlj22"] Feb 19 00:10:47 crc kubenswrapper[4825]: I0219 00:10:47.887724 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9"] Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.117093 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-746cc47688-xvzr9"] Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.374310 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.375365 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.377327 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.378442 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.393478 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.508031 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6062c1b8-ed9a-4f2d-930d-00ca27a90857-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6062c1b8-ed9a-4f2d-930d-00ca27a90857\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.508265 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6062c1b8-ed9a-4f2d-930d-00ca27a90857-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6062c1b8-ed9a-4f2d-930d-00ca27a90857\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.609069 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6062c1b8-ed9a-4f2d-930d-00ca27a90857-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6062c1b8-ed9a-4f2d-930d-00ca27a90857\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.610260 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6062c1b8-ed9a-4f2d-930d-00ca27a90857-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6062c1b8-ed9a-4f2d-930d-00ca27a90857\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.609255 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6062c1b8-ed9a-4f2d-930d-00ca27a90857-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6062c1b8-ed9a-4f2d-930d-00ca27a90857\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.629861 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6062c1b8-ed9a-4f2d-930d-00ca27a90857-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6062c1b8-ed9a-4f2d-930d-00ca27a90857\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.692793 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.784697 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" event={"ID":"4e0b57b1-f984-4766-a6e2-c5f94634f4f7","Type":"ContainerStarted","Data":"0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147"} Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.784744 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" event={"ID":"4e0b57b1-f984-4766-a6e2-c5f94634f4f7","Type":"ContainerStarted","Data":"0e477ff7fa555a342808780b14dcb4ce644c44452f765652acf1c0ee241fd7b0"} Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.787313 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" event={"ID":"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323","Type":"ContainerStarted","Data":"0c73715acba405ed797d32061952aef01bb47ca6475296cee947cd3c68c9f5b6"} Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.788468 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.788522 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[4825]: I0219 00:10:48.964787 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 
00:10:48 crc kubenswrapper[4825]: W0219 00:10:48.967968 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6062c1b8_ed9a_4f2d_930d_00ca27a90857.slice/crio-b644bdfa929dac8e929b26c4f7cd1424cff31a3980bdab994c37032a3d0477d8 WatchSource:0}: Error finding container b644bdfa929dac8e929b26c4f7cd1424cff31a3980bdab994c37032a3d0477d8: Status 404 returned error can't find the container with id b644bdfa929dac8e929b26c4f7cd1424cff31a3980bdab994c37032a3d0477d8 Feb 19 00:10:49 crc kubenswrapper[4825]: I0219 00:10:49.076261 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c8db180-cc65-4921-97d2-47ac7678b6c0" path="/var/lib/kubelet/pods/0c8db180-cc65-4921-97d2-47ac7678b6c0/volumes" Feb 19 00:10:49 crc kubenswrapper[4825]: I0219 00:10:49.799741 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6062c1b8-ed9a-4f2d-930d-00ca27a90857","Type":"ContainerStarted","Data":"e2ea52bb91247481763051801410d16d842c8a982b112859ee6f777260c53404"} Feb 19 00:10:49 crc kubenswrapper[4825]: I0219 00:10:49.800866 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6062c1b8-ed9a-4f2d-930d-00ca27a90857","Type":"ContainerStarted","Data":"b644bdfa929dac8e929b26c4f7cd1424cff31a3980bdab994c37032a3d0477d8"} Feb 19 00:10:49 crc kubenswrapper[4825]: I0219 00:10:49.803246 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" event={"ID":"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323","Type":"ContainerStarted","Data":"24b0f48e1c16a338fe4d9ce44710a5d084eca0061d9dcf813e78d7a5a7fa759e"} Feb 19 00:10:49 crc kubenswrapper[4825]: I0219 00:10:49.803720 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:49 crc kubenswrapper[4825]: I0219 00:10:49.803795 
4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:49 crc kubenswrapper[4825]: I0219 00:10:49.808952 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:10:49 crc kubenswrapper[4825]: I0219 00:10:49.821884 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" Feb 19 00:10:49 crc kubenswrapper[4825]: I0219 00:10:49.854571 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.854551361 podStartE2EDuration="1.854551361s" podCreationTimestamp="2026-02-19 00:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:49.819705464 +0000 UTC m=+195.510671501" watchObservedRunningTime="2026-02-19 00:10:49.854551361 +0000 UTC m=+195.545517418" Feb 19 00:10:49 crc kubenswrapper[4825]: I0219 00:10:49.886808 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" podStartSLOduration=4.886784852 podStartE2EDuration="4.886784852s" podCreationTimestamp="2026-02-19 00:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:49.859654678 +0000 UTC m=+195.550620735" watchObservedRunningTime="2026-02-19 00:10:49.886784852 +0000 UTC m=+195.577750899" Feb 19 00:10:49 crc kubenswrapper[4825]: I0219 00:10:49.921773 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" 
podStartSLOduration=4.921752414 podStartE2EDuration="4.921752414s" podCreationTimestamp="2026-02-19 00:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:49.891639211 +0000 UTC m=+195.582605268" watchObservedRunningTime="2026-02-19 00:10:49.921752414 +0000 UTC m=+195.612718451" Feb 19 00:10:50 crc kubenswrapper[4825]: I0219 00:10:50.809189 4825 generic.go:334] "Generic (PLEG): container finished" podID="6062c1b8-ed9a-4f2d-930d-00ca27a90857" containerID="e2ea52bb91247481763051801410d16d842c8a982b112859ee6f777260c53404" exitCode=0 Feb 19 00:10:50 crc kubenswrapper[4825]: I0219 00:10:50.809252 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6062c1b8-ed9a-4f2d-930d-00ca27a90857","Type":"ContainerDied","Data":"e2ea52bb91247481763051801410d16d842c8a982b112859ee6f777260c53404"} Feb 19 00:10:52 crc kubenswrapper[4825]: I0219 00:10:52.101098 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:10:52 crc kubenswrapper[4825]: I0219 00:10:52.112204 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6062c1b8-ed9a-4f2d-930d-00ca27a90857-kubelet-dir\") pod \"6062c1b8-ed9a-4f2d-930d-00ca27a90857\" (UID: \"6062c1b8-ed9a-4f2d-930d-00ca27a90857\") " Feb 19 00:10:52 crc kubenswrapper[4825]: I0219 00:10:52.112443 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6062c1b8-ed9a-4f2d-930d-00ca27a90857-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6062c1b8-ed9a-4f2d-930d-00ca27a90857" (UID: "6062c1b8-ed9a-4f2d-930d-00ca27a90857"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:10:52 crc kubenswrapper[4825]: I0219 00:10:52.213479 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6062c1b8-ed9a-4f2d-930d-00ca27a90857-kube-api-access\") pod \"6062c1b8-ed9a-4f2d-930d-00ca27a90857\" (UID: \"6062c1b8-ed9a-4f2d-930d-00ca27a90857\") " Feb 19 00:10:52 crc kubenswrapper[4825]: I0219 00:10:52.213864 4825 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6062c1b8-ed9a-4f2d-930d-00ca27a90857-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:52 crc kubenswrapper[4825]: I0219 00:10:52.222122 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6062c1b8-ed9a-4f2d-930d-00ca27a90857-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6062c1b8-ed9a-4f2d-930d-00ca27a90857" (UID: "6062c1b8-ed9a-4f2d-930d-00ca27a90857"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:10:52 crc kubenswrapper[4825]: I0219 00:10:52.315946 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6062c1b8-ed9a-4f2d-930d-00ca27a90857-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:52 crc kubenswrapper[4825]: I0219 00:10:52.825746 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6062c1b8-ed9a-4f2d-930d-00ca27a90857","Type":"ContainerDied","Data":"b644bdfa929dac8e929b26c4f7cd1424cff31a3980bdab994c37032a3d0477d8"} Feb 19 00:10:52 crc kubenswrapper[4825]: I0219 00:10:52.825817 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b644bdfa929dac8e929b26c4f7cd1424cff31a3980bdab994c37032a3d0477d8" Feb 19 00:10:52 crc kubenswrapper[4825]: I0219 00:10:52.825929 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:10:55 crc kubenswrapper[4825]: I0219 00:10:55.040926 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 00:10:55 crc kubenswrapper[4825]: I0219 00:10:55.041231 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 00:10:55 crc kubenswrapper[4825]: I0219 00:10:55.041614 4825 patch_prober.go:28] interesting pod/downloads-7954f5f757-bf56f container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Feb 19 00:10:55 crc kubenswrapper[4825]: I0219 00:10:55.041640 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bf56f" podUID="29be03fe-da22-41a7-9243-67aa815fbfb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Feb 19 00:10:55 crc kubenswrapper[4825]: I0219 00:10:55.998995 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 00:10:55 crc kubenswrapper[4825]: E0219 00:10:55.999563 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6062c1b8-ed9a-4f2d-930d-00ca27a90857" containerName="pruner" Feb 19 00:10:55 crc kubenswrapper[4825]: I0219 00:10:55.999595 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6062c1b8-ed9a-4f2d-930d-00ca27a90857" containerName="pruner" Feb 19 00:10:55 crc kubenswrapper[4825]: I0219 00:10:55.999928 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="6062c1b8-ed9a-4f2d-930d-00ca27a90857" containerName="pruner" Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.000874 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.003207 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.003674 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.003987 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.065351 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-var-lock\") pod \"installer-9-crc\" (UID: \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.065904 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.065947 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-kube-api-access\") pod \"installer-9-crc\" (UID: \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.166738 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.166792 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-kube-api-access\") pod \"installer-9-crc\" (UID: \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.166937 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-var-lock\") pod \"installer-9-crc\" (UID: \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.167029 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-var-lock\") pod \"installer-9-crc\" (UID: \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.167258 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.199829 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-kube-api-access\") pod \"installer-9-crc\" (UID: \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\") " 
pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.331101 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.869126 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.993915 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hl5qz"
Feb 19 00:10:56 crc kubenswrapper[4825]: I0219 00:10:56.993990 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hl5qz"
Feb 19 00:10:57 crc kubenswrapper[4825]: I0219 00:10:57.857531 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a","Type":"ContainerStarted","Data":"870c524419b7b9217f39b8e9def94fc29887b908a20531c5674a96e6995eabe3"}
Feb 19 00:10:57 crc kubenswrapper[4825]: I0219 00:10:57.915066 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hl5qz"
Feb 19 00:10:57 crc kubenswrapper[4825]: I0219 00:10:57.977372 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hl5qz"
Feb 19 00:10:58 crc kubenswrapper[4825]: I0219 00:10:58.509762 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hl5qz"]
Feb 19 00:10:58 crc kubenswrapper[4825]: I0219 00:10:58.823536 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:10:58 crc kubenswrapper[4825]: I0219 00:10:58.823964 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:10:58 crc kubenswrapper[4825]: I0219 00:10:58.824027 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tggq9"
Feb 19 00:10:58 crc kubenswrapper[4825]: I0219 00:10:58.824688 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e296002b2e0a47b132ee41abaebf9e72c3d1b6e2278117b99d5b8b095a271a5e"} pod="openshift-machine-config-operator/machine-config-daemon-tggq9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 00:10:58 crc kubenswrapper[4825]: I0219 00:10:58.824755 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" containerID="cri-o://e296002b2e0a47b132ee41abaebf9e72c3d1b6e2278117b99d5b8b095a271a5e" gracePeriod=600
Feb 19 00:10:58 crc kubenswrapper[4825]: I0219 00:10:58.865558 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a","Type":"ContainerStarted","Data":"07129c57899d6464161c66feaa33ee2c13c00f96816fa41cd272fdc97f441b55"}
Feb 19 00:10:58 crc kubenswrapper[4825]: I0219 00:10:58.869073 4825 generic.go:334] "Generic (PLEG): container finished" podID="5430045e-a57c-4dd3-8205-737c277afd00" containerID="6396cfaf1add662e4b8c55893c8081d9fd341171c5e41103d556652b9705777a" exitCode=0
Feb 19 00:10:58 crc kubenswrapper[4825]: I0219 00:10:58.869414 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g8mt" event={"ID":"5430045e-a57c-4dd3-8205-737c277afd00","Type":"ContainerDied","Data":"6396cfaf1add662e4b8c55893c8081d9fd341171c5e41103d556652b9705777a"}
Feb 19 00:10:58 crc kubenswrapper[4825]: I0219 00:10:58.888482 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.888462379 podStartE2EDuration="3.888462379s" podCreationTimestamp="2026-02-19 00:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:58.885218633 +0000 UTC m=+204.576184680" watchObservedRunningTime="2026-02-19 00:10:58.888462379 +0000 UTC m=+204.579428416"
Feb 19 00:10:59 crc kubenswrapper[4825]: I0219 00:10:59.877731 4825 generic.go:334] "Generic (PLEG): container finished" podID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerID="e296002b2e0a47b132ee41abaebf9e72c3d1b6e2278117b99d5b8b095a271a5e" exitCode=0
Feb 19 00:10:59 crc kubenswrapper[4825]: I0219 00:10:59.877872 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerDied","Data":"e296002b2e0a47b132ee41abaebf9e72c3d1b6e2278117b99d5b8b095a271a5e"}
Feb 19 00:10:59 crc kubenswrapper[4825]: I0219 00:10:59.878038 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hl5qz" podUID="bee79dcb-9870-4633-9f5b-56d2177c2616" containerName="registry-server" containerID="cri-o://afe050f8a521a9841a00b3808af6fc95661f55fcf4e6ba8a0d8f4b12c15cb785" gracePeriod=2
Feb 19 00:11:00 crc kubenswrapper[4825]: I0219 00:11:00.885520 4825 generic.go:334] "Generic (PLEG): container finished" podID="bee79dcb-9870-4633-9f5b-56d2177c2616" containerID="afe050f8a521a9841a00b3808af6fc95661f55fcf4e6ba8a0d8f4b12c15cb785" exitCode=0
Feb 19 00:11:00 crc kubenswrapper[4825]: I0219 00:11:00.885543 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hl5qz" event={"ID":"bee79dcb-9870-4633-9f5b-56d2177c2616","Type":"ContainerDied","Data":"afe050f8a521a9841a00b3808af6fc95661f55fcf4e6ba8a0d8f4b12c15cb785"}
Feb 19 00:11:00 crc kubenswrapper[4825]: I0219 00:11:00.960211 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hl5qz"
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.092464 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpbsh\" (UniqueName: \"kubernetes.io/projected/bee79dcb-9870-4633-9f5b-56d2177c2616-kube-api-access-wpbsh\") pod \"bee79dcb-9870-4633-9f5b-56d2177c2616\" (UID: \"bee79dcb-9870-4633-9f5b-56d2177c2616\") "
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.092518 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee79dcb-9870-4633-9f5b-56d2177c2616-utilities\") pod \"bee79dcb-9870-4633-9f5b-56d2177c2616\" (UID: \"bee79dcb-9870-4633-9f5b-56d2177c2616\") "
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.092646 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee79dcb-9870-4633-9f5b-56d2177c2616-catalog-content\") pod \"bee79dcb-9870-4633-9f5b-56d2177c2616\" (UID: \"bee79dcb-9870-4633-9f5b-56d2177c2616\") "
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.097951 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bee79dcb-9870-4633-9f5b-56d2177c2616-utilities" (OuterVolumeSpecName: "utilities") pod "bee79dcb-9870-4633-9f5b-56d2177c2616" (UID: "bee79dcb-9870-4633-9f5b-56d2177c2616"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.103415 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee79dcb-9870-4633-9f5b-56d2177c2616-kube-api-access-wpbsh" (OuterVolumeSpecName: "kube-api-access-wpbsh") pod "bee79dcb-9870-4633-9f5b-56d2177c2616" (UID: "bee79dcb-9870-4633-9f5b-56d2177c2616"). InnerVolumeSpecName "kube-api-access-wpbsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.146695 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bee79dcb-9870-4633-9f5b-56d2177c2616-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bee79dcb-9870-4633-9f5b-56d2177c2616" (UID: "bee79dcb-9870-4633-9f5b-56d2177c2616"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.194610 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee79dcb-9870-4633-9f5b-56d2177c2616-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.194647 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpbsh\" (UniqueName: \"kubernetes.io/projected/bee79dcb-9870-4633-9f5b-56d2177c2616-kube-api-access-wpbsh\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.194659 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee79dcb-9870-4633-9f5b-56d2177c2616-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.891581 4825 generic.go:334] "Generic (PLEG): container finished" podID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" containerID="8dbf92b3b5b467be6a49aebd1b232ff539cb6f3e3720b1290326cf4432a64be4" exitCode=0
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.893242 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgq6r" event={"ID":"d7139b54-5e59-487a-bbf3-2ac657e5e39d","Type":"ContainerDied","Data":"8dbf92b3b5b467be6a49aebd1b232ff539cb6f3e3720b1290326cf4432a64be4"}
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.900024 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerStarted","Data":"53baafc0a5b9c1c224604d4119e0e92aee7172ade69be62b2ef5640b6546ae02"}
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.903942 4825 generic.go:334] "Generic (PLEG): container finished" podID="fe3e4f27-2ef4-4187-911b-135249a4454f" containerID="1561bde9a019ad75990b3e3e9356542b9f7b39c928e474658a77a9e608ae4c02" exitCode=0
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.904084 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xs8g" event={"ID":"fe3e4f27-2ef4-4187-911b-135249a4454f","Type":"ContainerDied","Data":"1561bde9a019ad75990b3e3e9356542b9f7b39c928e474658a77a9e608ae4c02"}
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.906062 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hl5qz" event={"ID":"bee79dcb-9870-4633-9f5b-56d2177c2616","Type":"ContainerDied","Data":"1df127faf400a4d5a02dba3c2b5d71d01b3992e930c1b84e2e572da1f6388c6e"}
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.906208 4825 scope.go:117] "RemoveContainer" containerID="afe050f8a521a9841a00b3808af6fc95661f55fcf4e6ba8a0d8f4b12c15cb785"
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.906488 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hl5qz"
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.970699 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hl5qz"]
Feb 19 00:11:01 crc kubenswrapper[4825]: I0219 00:11:01.974847 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hl5qz"]
Feb 19 00:11:02 crc kubenswrapper[4825]: I0219 00:11:02.742947 4825 scope.go:117] "RemoveContainer" containerID="782c325f1a15f1654f72eb28dc14d3baf39eaf04b31af440d0bd647cfaf181e8"
Feb 19 00:11:02 crc kubenswrapper[4825]: I0219 00:11:02.777110 4825 scope.go:117] "RemoveContainer" containerID="92e9965e71f6d3f7c61249380fc85723ef1819a53ba75ba29746187a910df623"
Feb 19 00:11:03 crc kubenswrapper[4825]: I0219 00:11:03.073573 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee79dcb-9870-4633-9f5b-56d2177c2616" path="/var/lib/kubelet/pods/bee79dcb-9870-4633-9f5b-56d2177c2616/volumes"
Feb 19 00:11:03 crc kubenswrapper[4825]: I0219 00:11:03.920559 4825 generic.go:334] "Generic (PLEG): container finished" podID="ecd4f41a-e074-49eb-950d-44a9a4140304" containerID="591e05230f73721796c0665bff47226bc7904e431acf0aec5ebbed4739fb6f94" exitCode=0
Feb 19 00:11:03 crc kubenswrapper[4825]: I0219 00:11:03.920640 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptklb" event={"ID":"ecd4f41a-e074-49eb-950d-44a9a4140304","Type":"ContainerDied","Data":"591e05230f73721796c0665bff47226bc7904e431acf0aec5ebbed4739fb6f94"}
Feb 19 00:11:03 crc kubenswrapper[4825]: I0219 00:11:03.925074 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp65c" event={"ID":"3f9f8ad3-8653-427d-ad82-6b0157a57827","Type":"ContainerStarted","Data":"fd24d514997210b01dcd7c66e095acda223ceef44982b980f9b712fc16296f9e"}
Feb 19 00:11:03 crc kubenswrapper[4825]: I0219 00:11:03.928384 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zstqt" event={"ID":"1e372937-4e80-4153-bf75-7811efb6750b","Type":"ContainerStarted","Data":"309f82afc47a670752eed0317c295282f4b7719b5e63bc0c01ae1aa3572e4fc8"}
Feb 19 00:11:03 crc kubenswrapper[4825]: I0219 00:11:03.930275 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g8mt" event={"ID":"5430045e-a57c-4dd3-8205-737c277afd00","Type":"ContainerStarted","Data":"bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093"}
Feb 19 00:11:03 crc kubenswrapper[4825]: I0219 00:11:03.980637 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7g8mt" podStartSLOduration=5.130847893 podStartE2EDuration="56.980614656s" podCreationTimestamp="2026-02-19 00:10:07 +0000 UTC" firstStartedPulling="2026-02-19 00:10:10.893759623 +0000 UTC m=+156.584725670" lastFinishedPulling="2026-02-19 00:11:02.743526386 +0000 UTC m=+208.434492433" observedRunningTime="2026-02-19 00:11:03.96216508 +0000 UTC m=+209.653131127" watchObservedRunningTime="2026-02-19 00:11:03.980614656 +0000 UTC m=+209.671580703"
Feb 19 00:11:04 crc kubenswrapper[4825]: I0219 00:11:04.936622 4825 generic.go:334] "Generic (PLEG): container finished" podID="1e372937-4e80-4153-bf75-7811efb6750b" containerID="309f82afc47a670752eed0317c295282f4b7719b5e63bc0c01ae1aa3572e4fc8" exitCode=0
Feb 19 00:11:04 crc kubenswrapper[4825]: I0219 00:11:04.936713 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zstqt" event={"ID":"1e372937-4e80-4153-bf75-7811efb6750b","Type":"ContainerDied","Data":"309f82afc47a670752eed0317c295282f4b7719b5e63bc0c01ae1aa3572e4fc8"}
Feb 19 00:11:04 crc kubenswrapper[4825]: I0219 00:11:04.939700 4825 generic.go:334] "Generic (PLEG): container finished" podID="3f9f8ad3-8653-427d-ad82-6b0157a57827" containerID="fd24d514997210b01dcd7c66e095acda223ceef44982b980f9b712fc16296f9e" exitCode=0
Feb 19 00:11:04 crc kubenswrapper[4825]: I0219 00:11:04.939747 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp65c" event={"ID":"3f9f8ad3-8653-427d-ad82-6b0157a57827","Type":"ContainerDied","Data":"fd24d514997210b01dcd7c66e095acda223ceef44982b980f9b712fc16296f9e"}
Feb 19 00:11:05 crc kubenswrapper[4825]: I0219 00:11:05.081171 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bf56f"
Feb 19 00:11:05 crc kubenswrapper[4825]: I0219 00:11:05.493267 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-746cc47688-xvzr9"]
Feb 19 00:11:05 crc kubenswrapper[4825]: I0219 00:11:05.493479 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" podUID="ae5f1fe3-71a2-4edc-8f7f-17e693d1e323" containerName="controller-manager" containerID="cri-o://24b0f48e1c16a338fe4d9ce44710a5d084eca0061d9dcf813e78d7a5a7fa759e" gracePeriod=30
Feb 19 00:11:05 crc kubenswrapper[4825]: I0219 00:11:05.506622 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9"]
Feb 19 00:11:05 crc kubenswrapper[4825]: I0219 00:11:05.506869 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" podUID="4e0b57b1-f984-4766-a6e2-c5f94634f4f7" containerName="route-controller-manager" containerID="cri-o://0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147" gracePeriod=30
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.923162 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9"
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.959989 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"]
Feb 19 00:11:06 crc kubenswrapper[4825]: E0219 00:11:06.960235 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee79dcb-9870-4633-9f5b-56d2177c2616" containerName="extract-content"
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.960249 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee79dcb-9870-4633-9f5b-56d2177c2616" containerName="extract-content"
Feb 19 00:11:06 crc kubenswrapper[4825]: E0219 00:11:06.960259 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0b57b1-f984-4766-a6e2-c5f94634f4f7" containerName="route-controller-manager"
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.960266 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0b57b1-f984-4766-a6e2-c5f94634f4f7" containerName="route-controller-manager"
Feb 19 00:11:06 crc kubenswrapper[4825]: E0219 00:11:06.960278 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee79dcb-9870-4633-9f5b-56d2177c2616" containerName="registry-server"
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.960288 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee79dcb-9870-4633-9f5b-56d2177c2616" containerName="registry-server"
Feb 19 00:11:06 crc kubenswrapper[4825]: E0219 00:11:06.960300 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee79dcb-9870-4633-9f5b-56d2177c2616" containerName="extract-utilities"
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.960307 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee79dcb-9870-4633-9f5b-56d2177c2616" containerName="extract-utilities"
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.960428 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0b57b1-f984-4766-a6e2-c5f94634f4f7" containerName="route-controller-manager"
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.960445 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee79dcb-9870-4633-9f5b-56d2177c2616" containerName="registry-server"
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.960836 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.972635 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"]
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.990664 4825 generic.go:334] "Generic (PLEG): container finished" podID="ae5f1fe3-71a2-4edc-8f7f-17e693d1e323" containerID="24b0f48e1c16a338fe4d9ce44710a5d084eca0061d9dcf813e78d7a5a7fa759e" exitCode=0
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.990804 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" event={"ID":"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323","Type":"ContainerDied","Data":"24b0f48e1c16a338fe4d9ce44710a5d084eca0061d9dcf813e78d7a5a7fa759e"}
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.995248 4825 generic.go:334] "Generic (PLEG): container finished" podID="4e0b57b1-f984-4766-a6e2-c5f94634f4f7" containerID="0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147" exitCode=0
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.995307 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9"
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.995308 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" event={"ID":"4e0b57b1-f984-4766-a6e2-c5f94634f4f7","Type":"ContainerDied","Data":"0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147"}
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.995346 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9" event={"ID":"4e0b57b1-f984-4766-a6e2-c5f94634f4f7","Type":"ContainerDied","Data":"0e477ff7fa555a342808780b14dcb4ce644c44452f765652acf1c0ee241fd7b0"}
Feb 19 00:11:06 crc kubenswrapper[4825]: I0219 00:11:06.995385 4825 scope.go:117] "RemoveContainer" containerID="0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.016195 4825 scope.go:117] "RemoveContainer" containerID="0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147"
Feb 19 00:11:07 crc kubenswrapper[4825]: E0219 00:11:07.017143 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147\": container with ID starting with 0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147 not found: ID does not exist" containerID="0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.017215 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147"} err="failed to get container status \"0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147\": rpc error: code = NotFound desc = could not find container \"0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147\": container with ID starting with 0e9550152fc86552b8b65eb25b56f52e9795fd80a18c7d4fbc25c397fa387147 not found: ID does not exist"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.017844 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-client-ca\") pod \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") "
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.017887 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-config\") pod \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") "
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.017965 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-serving-cert\") pod \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") "
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.018001 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtsq7\" (UniqueName: \"kubernetes.io/projected/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-kube-api-access-mtsq7\") pod \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\" (UID: \"4e0b57b1-f984-4766-a6e2-c5f94634f4f7\") "
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.020008 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "4e0b57b1-f984-4766-a6e2-c5f94634f4f7" (UID: "4e0b57b1-f984-4766-a6e2-c5f94634f4f7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.020188 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-config" (OuterVolumeSpecName: "config") pod "4e0b57b1-f984-4766-a6e2-c5f94634f4f7" (UID: "4e0b57b1-f984-4766-a6e2-c5f94634f4f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.027164 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-kube-api-access-mtsq7" (OuterVolumeSpecName: "kube-api-access-mtsq7") pod "4e0b57b1-f984-4766-a6e2-c5f94634f4f7" (UID: "4e0b57b1-f984-4766-a6e2-c5f94634f4f7"). InnerVolumeSpecName "kube-api-access-mtsq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.036580 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4e0b57b1-f984-4766-a6e2-c5f94634f4f7" (UID: "4e0b57b1-f984-4766-a6e2-c5f94634f4f7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:07 crc kubenswrapper[4825]: E0219 00:11:07.115114 4825 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e0b57b1_f984_4766_a6e2_c5f94634f4f7.slice/crio-0e477ff7fa555a342808780b14dcb4ce644c44452f765652acf1c0ee241fd7b0\": RecentStats: unable to find data in memory cache]"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.119928 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v92t\" (UniqueName: \"kubernetes.io/projected/f1196222-ab8c-400c-9c0c-2649d6292f53-kube-api-access-7v92t\") pod \"route-controller-manager-75d5f76c4f-6cs7x\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") " pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.120389 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1196222-ab8c-400c-9c0c-2649d6292f53-config\") pod \"route-controller-manager-75d5f76c4f-6cs7x\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") " pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.120560 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1196222-ab8c-400c-9c0c-2649d6292f53-client-ca\") pod \"route-controller-manager-75d5f76c4f-6cs7x\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") " pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.120698 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1196222-ab8c-400c-9c0c-2649d6292f53-serving-cert\") pod \"route-controller-manager-75d5f76c4f-6cs7x\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") " pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.120870 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.120948 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.121018 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.121082 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtsq7\" (UniqueName: \"kubernetes.io/projected/4e0b57b1-f984-4766-a6e2-c5f94634f4f7-kube-api-access-mtsq7\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.138478 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.223039 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-client-ca" (OuterVolumeSpecName: "client-ca") pod "ae5f1fe3-71a2-4edc-8f7f-17e693d1e323" (UID: "ae5f1fe3-71a2-4edc-8f7f-17e693d1e323"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.223659 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-client-ca\") pod \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") "
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.223849 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-proxy-ca-bundles\") pod \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") "
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.223983 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dsd8\" (UniqueName: \"kubernetes.io/projected/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-kube-api-access-8dsd8\") pod \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") "
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.224359 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ae5f1fe3-71a2-4edc-8f7f-17e693d1e323" (UID: "ae5f1fe3-71a2-4edc-8f7f-17e693d1e323"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.225105 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-config\") pod \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") "
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.225599 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-config" (OuterVolumeSpecName: "config") pod "ae5f1fe3-71a2-4edc-8f7f-17e693d1e323" (UID: "ae5f1fe3-71a2-4edc-8f7f-17e693d1e323"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.225952 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-serving-cert\") pod \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\" (UID: \"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323\") "
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.226381 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1196222-ab8c-400c-9c0c-2649d6292f53-serving-cert\") pod \"route-controller-manager-75d5f76c4f-6cs7x\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") " pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.226653 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v92t\" (UniqueName: \"kubernetes.io/projected/f1196222-ab8c-400c-9c0c-2649d6292f53-kube-api-access-7v92t\") pod \"route-controller-manager-75d5f76c4f-6cs7x\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") " pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.226843 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1196222-ab8c-400c-9c0c-2649d6292f53-config\") pod \"route-controller-manager-75d5f76c4f-6cs7x\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") " pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.226981 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1196222-ab8c-400c-9c0c-2649d6292f53-client-ca\") pod \"route-controller-manager-75d5f76c4f-6cs7x\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") " pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.227143 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.227256 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.227348 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.228519 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1196222-ab8c-400c-9c0c-2649d6292f53-client-ca\") pod \"route-controller-manager-75d5f76c4f-6cs7x\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") " pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.228775 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1196222-ab8c-400c-9c0c-2649d6292f53-config\") pod \"route-controller-manager-75d5f76c4f-6cs7x\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") " pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.230440 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1196222-ab8c-400c-9c0c-2649d6292f53-serving-cert\") pod \"route-controller-manager-75d5f76c4f-6cs7x\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") " pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.232316 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-kube-api-access-8dsd8" (OuterVolumeSpecName: "kube-api-access-8dsd8") pod "ae5f1fe3-71a2-4edc-8f7f-17e693d1e323" (UID: "ae5f1fe3-71a2-4edc-8f7f-17e693d1e323"). InnerVolumeSpecName "kube-api-access-8dsd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.236721 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ae5f1fe3-71a2-4edc-8f7f-17e693d1e323" (UID: "ae5f1fe3-71a2-4edc-8f7f-17e693d1e323"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.245460 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v92t\" (UniqueName: \"kubernetes.io/projected/f1196222-ab8c-400c-9c0c-2649d6292f53-kube-api-access-7v92t\") pod \"route-controller-manager-75d5f76c4f-6cs7x\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") " pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.292377 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.318053 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9"]
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.321224 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59c8486cfc-f6vc9"]
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.329625 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dsd8\" (UniqueName: \"kubernetes.io/projected/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-kube-api-access-8dsd8\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:07 crc kubenswrapper[4825]: I0219 00:11:07.329910 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:08 crc kubenswrapper[4825]: I0219 00:11:08.004360 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xs8g" event={"ID":"fe3e4f27-2ef4-4187-911b-135249a4454f","Type":"ContainerStarted","Data":"f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b"}
Feb 19
00:11:08 crc kubenswrapper[4825]: I0219 00:11:08.007374 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" event={"ID":"ae5f1fe3-71a2-4edc-8f7f-17e693d1e323","Type":"ContainerDied","Data":"0c73715acba405ed797d32061952aef01bb47ca6475296cee947cd3c68c9f5b6"} Feb 19 00:11:08 crc kubenswrapper[4825]: I0219 00:11:08.007471 4825 scope.go:117] "RemoveContainer" containerID="24b0f48e1c16a338fe4d9ce44710a5d084eca0061d9dcf813e78d7a5a7fa759e" Feb 19 00:11:08 crc kubenswrapper[4825]: I0219 00:11:08.007690 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-746cc47688-xvzr9" Feb 19 00:11:08 crc kubenswrapper[4825]: I0219 00:11:08.039124 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-746cc47688-xvzr9"] Feb 19 00:11:08 crc kubenswrapper[4825]: I0219 00:11:08.042841 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-746cc47688-xvzr9"] Feb 19 00:11:08 crc kubenswrapper[4825]: I0219 00:11:08.275362 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7g8mt" Feb 19 00:11:08 crc kubenswrapper[4825]: I0219 00:11:08.275431 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7g8mt" Feb 19 00:11:08 crc kubenswrapper[4825]: I0219 00:11:08.322444 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7g8mt" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.047750 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8xs8g" podStartSLOduration=3.927672152 podStartE2EDuration="1m3.047717994s" podCreationTimestamp="2026-02-19 00:10:06 +0000 UTC" 
firstStartedPulling="2026-02-19 00:10:07.699360499 +0000 UTC m=+153.390326546" lastFinishedPulling="2026-02-19 00:11:06.819406341 +0000 UTC m=+212.510372388" observedRunningTime="2026-02-19 00:11:09.043089416 +0000 UTC m=+214.734055463" watchObservedRunningTime="2026-02-19 00:11:09.047717994 +0000 UTC m=+214.738684051" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.076848 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0b57b1-f984-4766-a6e2-c5f94634f4f7" path="/var/lib/kubelet/pods/4e0b57b1-f984-4766-a6e2-c5f94634f4f7/volumes" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.078134 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5f1fe3-71a2-4edc-8f7f-17e693d1e323" path="/var/lib/kubelet/pods/ae5f1fe3-71a2-4edc-8f7f-17e693d1e323/volumes" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.078814 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7g8mt" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.315560 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5897d57d49-thm2s"] Feb 19 00:11:09 crc kubenswrapper[4825]: E0219 00:11:09.316070 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5f1fe3-71a2-4edc-8f7f-17e693d1e323" containerName="controller-manager" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.316097 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5f1fe3-71a2-4edc-8f7f-17e693d1e323" containerName="controller-manager" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.316340 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5f1fe3-71a2-4edc-8f7f-17e693d1e323" containerName="controller-manager" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.317379 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.320266 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.320995 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.321103 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.321004 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.321310 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.322427 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.328009 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5897d57d49-thm2s"] Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.328143 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.463299 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-config\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " 
pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.463398 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-client-ca\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.463570 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76746569-7b81-4132-8609-fee1ea8bb9dc-serving-cert\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.463676 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-proxy-ca-bundles\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.464073 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zngv\" (UniqueName: \"kubernetes.io/projected/76746569-7b81-4132-8609-fee1ea8bb9dc-kube-api-access-4zngv\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.565729 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-config\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.565782 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-client-ca\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.567061 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-client-ca\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.567117 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76746569-7b81-4132-8609-fee1ea8bb9dc-serving-cert\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.567181 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-proxy-ca-bundles\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.567265 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zngv\" (UniqueName: \"kubernetes.io/projected/76746569-7b81-4132-8609-fee1ea8bb9dc-kube-api-access-4zngv\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.567288 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-config\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.570209 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-proxy-ca-bundles\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.576441 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76746569-7b81-4132-8609-fee1ea8bb9dc-serving-cert\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.586909 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zngv\" (UniqueName: \"kubernetes.io/projected/76746569-7b81-4132-8609-fee1ea8bb9dc-kube-api-access-4zngv\") pod \"controller-manager-5897d57d49-thm2s\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") " 
pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:09 crc kubenswrapper[4825]: I0219 00:11:09.640341 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:10 crc kubenswrapper[4825]: I0219 00:11:10.222779 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5897d57d49-thm2s"] Feb 19 00:11:10 crc kubenswrapper[4825]: W0219 00:11:10.227096 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76746569_7b81_4132_8609_fee1ea8bb9dc.slice/crio-5138620cc606a655043950d8caf2fe36420e958f84b273c522315e819523e12a WatchSource:0}: Error finding container 5138620cc606a655043950d8caf2fe36420e958f84b273c522315e819523e12a: Status 404 returned error can't find the container with id 5138620cc606a655043950d8caf2fe36420e958f84b273c522315e819523e12a Feb 19 00:11:10 crc kubenswrapper[4825]: I0219 00:11:10.323649 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"] Feb 19 00:11:10 crc kubenswrapper[4825]: W0219 00:11:10.372549 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1196222_ab8c_400c_9c0c_2649d6292f53.slice/crio-4a5454d4b0eb2f36500feb03f5b977d6ad29d9d541f952d60fc1c9216c443e2e WatchSource:0}: Error finding container 4a5454d4b0eb2f36500feb03f5b977d6ad29d9d541f952d60fc1c9216c443e2e: Status 404 returned error can't find the container with id 4a5454d4b0eb2f36500feb03f5b977d6ad29d9d541f952d60fc1c9216c443e2e Feb 19 00:11:11 crc kubenswrapper[4825]: I0219 00:11:11.031369 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x" 
event={"ID":"f1196222-ab8c-400c-9c0c-2649d6292f53","Type":"ContainerStarted","Data":"4a5454d4b0eb2f36500feb03f5b977d6ad29d9d541f952d60fc1c9216c443e2e"} Feb 19 00:11:11 crc kubenswrapper[4825]: I0219 00:11:11.032767 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" event={"ID":"76746569-7b81-4132-8609-fee1ea8bb9dc","Type":"ContainerStarted","Data":"5138620cc606a655043950d8caf2fe36420e958f84b273c522315e819523e12a"} Feb 19 00:11:11 crc kubenswrapper[4825]: I0219 00:11:11.035131 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgq6r" event={"ID":"d7139b54-5e59-487a-bbf3-2ac657e5e39d","Type":"ContainerStarted","Data":"89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9"} Feb 19 00:11:12 crc kubenswrapper[4825]: I0219 00:11:12.067650 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zgq6r" podStartSLOduration=4.896941267 podStartE2EDuration="1m7.067630467s" podCreationTimestamp="2026-02-19 00:10:05 +0000 UTC" firstStartedPulling="2026-02-19 00:10:07.68711788 +0000 UTC m=+153.378083927" lastFinishedPulling="2026-02-19 00:11:09.85780708 +0000 UTC m=+215.548773127" observedRunningTime="2026-02-19 00:11:12.065947789 +0000 UTC m=+217.756913866" watchObservedRunningTime="2026-02-19 00:11:12.067630467 +0000 UTC m=+217.758596514" Feb 19 00:11:14 crc kubenswrapper[4825]: I0219 00:11:14.054855 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zstqt" event={"ID":"1e372937-4e80-4153-bf75-7811efb6750b","Type":"ContainerStarted","Data":"d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d"} Feb 19 00:11:14 crc kubenswrapper[4825]: I0219 00:11:14.056014 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x" 
event={"ID":"f1196222-ab8c-400c-9c0c-2649d6292f53","Type":"ContainerStarted","Data":"5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162"} Feb 19 00:11:14 crc kubenswrapper[4825]: I0219 00:11:14.056609 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x" Feb 19 00:11:14 crc kubenswrapper[4825]: I0219 00:11:14.057486 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" event={"ID":"76746569-7b81-4132-8609-fee1ea8bb9dc","Type":"ContainerStarted","Data":"125e73d3227ed6aed31d3fa276fae92628db67ef58eb134bc4d2b37546e00e2a"} Feb 19 00:11:14 crc kubenswrapper[4825]: I0219 00:11:14.057801 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:14 crc kubenswrapper[4825]: I0219 00:11:14.068003 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" Feb 19 00:11:14 crc kubenswrapper[4825]: I0219 00:11:14.079758 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x" podStartSLOduration=9.079740544 podStartE2EDuration="9.079740544s" podCreationTimestamp="2026-02-19 00:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:14.076807874 +0000 UTC m=+219.767773931" watchObservedRunningTime="2026-02-19 00:11:14.079740544 +0000 UTC m=+219.770706601" Feb 19 00:11:14 crc kubenswrapper[4825]: I0219 00:11:14.108601 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zstqt" podStartSLOduration=4.246194532 podStartE2EDuration="1m5.108573467s" 
podCreationTimestamp="2026-02-19 00:10:09 +0000 UTC" firstStartedPulling="2026-02-19 00:10:11.903584448 +0000 UTC m=+157.594550495" lastFinishedPulling="2026-02-19 00:11:12.765963373 +0000 UTC m=+218.456929430" observedRunningTime="2026-02-19 00:11:14.104274563 +0000 UTC m=+219.795240630" watchObservedRunningTime="2026-02-19 00:11:14.108573467 +0000 UTC m=+219.799539514" Feb 19 00:11:14 crc kubenswrapper[4825]: I0219 00:11:14.332602 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x" Feb 19 00:11:14 crc kubenswrapper[4825]: I0219 00:11:14.364771 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" podStartSLOduration=9.364750478 podStartE2EDuration="9.364750478s" podCreationTimestamp="2026-02-19 00:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:14.13160228 +0000 UTC m=+219.822568357" watchObservedRunningTime="2026-02-19 00:11:14.364750478 +0000 UTC m=+220.055716525" Feb 19 00:11:16 crc kubenswrapper[4825]: I0219 00:11:16.368589 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:11:16 crc kubenswrapper[4825]: I0219 00:11:16.368698 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:11:16 crc kubenswrapper[4825]: I0219 00:11:16.438350 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:11:16 crc kubenswrapper[4825]: I0219 00:11:16.519839 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:11:16 crc kubenswrapper[4825]: I0219 
00:11:16.519888 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:11:16 crc kubenswrapper[4825]: I0219 00:11:16.563022 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:11:17 crc kubenswrapper[4825]: I0219 00:11:17.087140 4825 generic.go:334] "Generic (PLEG): container finished" podID="87f1d2aa-1887-4322-a053-12e950fa2250" containerID="4a941f854c7c50b76e6b367b8d0a8a93250b79f323cdb03ba856c8b99b77a184" exitCode=0 Feb 19 00:11:17 crc kubenswrapper[4825]: I0219 00:11:17.087382 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psl6h" event={"ID":"87f1d2aa-1887-4322-a053-12e950fa2250","Type":"ContainerDied","Data":"4a941f854c7c50b76e6b367b8d0a8a93250b79f323cdb03ba856c8b99b77a184"} Feb 19 00:11:17 crc kubenswrapper[4825]: I0219 00:11:17.093557 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptklb" event={"ID":"ecd4f41a-e074-49eb-950d-44a9a4140304","Type":"ContainerStarted","Data":"57a4c3e23db5570ec62cfbd6c2252208d27f4d53c93a047d33861626774c45d8"} Feb 19 00:11:17 crc kubenswrapper[4825]: I0219 00:11:17.108785 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp65c" event={"ID":"3f9f8ad3-8653-427d-ad82-6b0157a57827","Type":"ContainerStarted","Data":"66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057"} Feb 19 00:11:17 crc kubenswrapper[4825]: I0219 00:11:17.146099 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sp65c" podStartSLOduration=3.794446425 podStartE2EDuration="1m8.146071182s" podCreationTimestamp="2026-02-19 00:10:09 +0000 UTC" firstStartedPulling="2026-02-19 00:10:11.904142546 +0000 UTC m=+157.595108593" lastFinishedPulling="2026-02-19 00:11:16.255767303 
+0000 UTC m=+221.946733350" observedRunningTime="2026-02-19 00:11:17.143239054 +0000 UTC m=+222.834205141" watchObservedRunningTime="2026-02-19 00:11:17.146071182 +0000 UTC m=+222.837037269" Feb 19 00:11:17 crc kubenswrapper[4825]: I0219 00:11:17.168266 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zgq6r" Feb 19 00:11:17 crc kubenswrapper[4825]: I0219 00:11:17.169166 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ptklb" podStartSLOduration=3.782419625 podStartE2EDuration="1m9.169145517s" podCreationTimestamp="2026-02-19 00:10:08 +0000 UTC" firstStartedPulling="2026-02-19 00:10:10.868102656 +0000 UTC m=+156.559068703" lastFinishedPulling="2026-02-19 00:11:16.254828548 +0000 UTC m=+221.945794595" observedRunningTime="2026-02-19 00:11:17.169057695 +0000 UTC m=+222.860023782" watchObservedRunningTime="2026-02-19 00:11:17.169145517 +0000 UTC m=+222.860111564" Feb 19 00:11:17 crc kubenswrapper[4825]: I0219 00:11:17.169710 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8xs8g" Feb 19 00:11:18 crc kubenswrapper[4825]: I0219 00:11:18.694024 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:11:18 crc kubenswrapper[4825]: I0219 00:11:18.694399 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:11:18 crc kubenswrapper[4825]: I0219 00:11:18.754292 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:11:19 crc kubenswrapper[4825]: I0219 00:11:19.125735 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psl6h" 
event={"ID":"87f1d2aa-1887-4322-a053-12e950fa2250","Type":"ContainerStarted","Data":"906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de"} Feb 19 00:11:19 crc kubenswrapper[4825]: I0219 00:11:19.162267 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-psl6h" podStartSLOduration=3.028893997 podStartE2EDuration="1m13.162248377s" podCreationTimestamp="2026-02-19 00:10:06 +0000 UTC" firstStartedPulling="2026-02-19 00:10:08.77589585 +0000 UTC m=+154.466861897" lastFinishedPulling="2026-02-19 00:11:18.90925024 +0000 UTC m=+224.600216277" observedRunningTime="2026-02-19 00:11:19.156812414 +0000 UTC m=+224.847778481" watchObservedRunningTime="2026-02-19 00:11:19.162248377 +0000 UTC m=+224.853214434" Feb 19 00:11:19 crc kubenswrapper[4825]: I0219 00:11:19.655874 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:11:19 crc kubenswrapper[4825]: I0219 00:11:19.656054 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:11:20 crc kubenswrapper[4825]: I0219 00:11:20.125368 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:11:20 crc kubenswrapper[4825]: I0219 00:11:20.125431 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:11:20 crc kubenswrapper[4825]: I0219 00:11:20.699111 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zstqt" podUID="1e372937-4e80-4153-bf75-7811efb6750b" containerName="registry-server" probeResult="failure" output=< Feb 19 00:11:20 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Feb 19 00:11:20 crc kubenswrapper[4825]: > Feb 19 00:11:21 crc kubenswrapper[4825]: I0219 
00:11:21.167911 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sp65c" podUID="3f9f8ad3-8653-427d-ad82-6b0157a57827" containerName="registry-server" probeResult="failure" output=<
Feb 19 00:11:21 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s
Feb 19 00:11:21 crc kubenswrapper[4825]: >
Feb 19 00:11:25 crc kubenswrapper[4825]: I0219 00:11:25.512993 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5897d57d49-thm2s"]
Feb 19 00:11:25 crc kubenswrapper[4825]: I0219 00:11:25.513347 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" podUID="76746569-7b81-4132-8609-fee1ea8bb9dc" containerName="controller-manager" containerID="cri-o://125e73d3227ed6aed31d3fa276fae92628db67ef58eb134bc4d2b37546e00e2a" gracePeriod=30
Feb 19 00:11:25 crc kubenswrapper[4825]: I0219 00:11:25.617711 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"]
Feb 19 00:11:25 crc kubenswrapper[4825]: I0219 00:11:25.617907 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x" podUID="f1196222-ab8c-400c-9c0c-2649d6292f53" containerName="route-controller-manager" containerID="cri-o://5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162" gracePeriod=30
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.136258 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.166124 4825 generic.go:334] "Generic (PLEG): container finished" podID="f1196222-ab8c-400c-9c0c-2649d6292f53" containerID="5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162" exitCode=0
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.166209 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x" event={"ID":"f1196222-ab8c-400c-9c0c-2649d6292f53","Type":"ContainerDied","Data":"5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162"}
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.166240 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x" event={"ID":"f1196222-ab8c-400c-9c0c-2649d6292f53","Type":"ContainerDied","Data":"4a5454d4b0eb2f36500feb03f5b977d6ad29d9d541f952d60fc1c9216c443e2e"}
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.166232 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.166278 4825 scope.go:117] "RemoveContainer" containerID="5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162"
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.168420 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" event={"ID":"76746569-7b81-4132-8609-fee1ea8bb9dc","Type":"ContainerDied","Data":"125e73d3227ed6aed31d3fa276fae92628db67ef58eb134bc4d2b37546e00e2a"}
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.168751 4825 generic.go:334] "Generic (PLEG): container finished" podID="76746569-7b81-4132-8609-fee1ea8bb9dc" containerID="125e73d3227ed6aed31d3fa276fae92628db67ef58eb134bc4d2b37546e00e2a" exitCode=0
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.195729 4825 scope.go:117] "RemoveContainer" containerID="5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162"
Feb 19 00:11:26 crc kubenswrapper[4825]: E0219 00:11:26.196731 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162\": container with ID starting with 5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162 not found: ID does not exist" containerID="5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162"
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.196885 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162"} err="failed to get container status \"5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162\": rpc error: code = NotFound desc = could not find container \"5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162\": container with ID starting with 5c7f6b6c106b688ea352024e898bf212124acc79abdca46ca1be3748f872f162 not found: ID does not exist"
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.235931 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v92t\" (UniqueName: \"kubernetes.io/projected/f1196222-ab8c-400c-9c0c-2649d6292f53-kube-api-access-7v92t\") pod \"f1196222-ab8c-400c-9c0c-2649d6292f53\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") "
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.236428 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1196222-ab8c-400c-9c0c-2649d6292f53-config\") pod \"f1196222-ab8c-400c-9c0c-2649d6292f53\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") "
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.237681 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1196222-ab8c-400c-9c0c-2649d6292f53-client-ca\") pod \"f1196222-ab8c-400c-9c0c-2649d6292f53\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") "
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.237589 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1196222-ab8c-400c-9c0c-2649d6292f53-config" (OuterVolumeSpecName: "config") pod "f1196222-ab8c-400c-9c0c-2649d6292f53" (UID: "f1196222-ab8c-400c-9c0c-2649d6292f53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.238396 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1196222-ab8c-400c-9c0c-2649d6292f53-client-ca" (OuterVolumeSpecName: "client-ca") pod "f1196222-ab8c-400c-9c0c-2649d6292f53" (UID: "f1196222-ab8c-400c-9c0c-2649d6292f53"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.238898 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1196222-ab8c-400c-9c0c-2649d6292f53-serving-cert\") pod \"f1196222-ab8c-400c-9c0c-2649d6292f53\" (UID: \"f1196222-ab8c-400c-9c0c-2649d6292f53\") "
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.245632 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1196222-ab8c-400c-9c0c-2649d6292f53-kube-api-access-7v92t" (OuterVolumeSpecName: "kube-api-access-7v92t") pod "f1196222-ab8c-400c-9c0c-2649d6292f53" (UID: "f1196222-ab8c-400c-9c0c-2649d6292f53"). InnerVolumeSpecName "kube-api-access-7v92t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.257675 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1196222-ab8c-400c-9c0c-2649d6292f53-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f1196222-ab8c-400c-9c0c-2649d6292f53" (UID: "f1196222-ab8c-400c-9c0c-2649d6292f53"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.257890 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1196222-ab8c-400c-9c0c-2649d6292f53-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.258023 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1196222-ab8c-400c-9c0c-2649d6292f53-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.291233 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s"
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.359569 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1196222-ab8c-400c-9c0c-2649d6292f53-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.359613 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v92t\" (UniqueName: \"kubernetes.io/projected/f1196222-ab8c-400c-9c0c-2649d6292f53-kube-api-access-7v92t\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.461059 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76746569-7b81-4132-8609-fee1ea8bb9dc-serving-cert\") pod \"76746569-7b81-4132-8609-fee1ea8bb9dc\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") "
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.461865 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-proxy-ca-bundles\") pod \"76746569-7b81-4132-8609-fee1ea8bb9dc\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") "
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.462034 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-client-ca\") pod \"76746569-7b81-4132-8609-fee1ea8bb9dc\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") "
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.462233 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-config\") pod \"76746569-7b81-4132-8609-fee1ea8bb9dc\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") "
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.462353 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zngv\" (UniqueName: \"kubernetes.io/projected/76746569-7b81-4132-8609-fee1ea8bb9dc-kube-api-access-4zngv\") pod \"76746569-7b81-4132-8609-fee1ea8bb9dc\" (UID: \"76746569-7b81-4132-8609-fee1ea8bb9dc\") "
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.463147 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-client-ca" (OuterVolumeSpecName: "client-ca") pod "76746569-7b81-4132-8609-fee1ea8bb9dc" (UID: "76746569-7b81-4132-8609-fee1ea8bb9dc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.463749 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-config" (OuterVolumeSpecName: "config") pod "76746569-7b81-4132-8609-fee1ea8bb9dc" (UID: "76746569-7b81-4132-8609-fee1ea8bb9dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.464443 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "76746569-7b81-4132-8609-fee1ea8bb9dc" (UID: "76746569-7b81-4132-8609-fee1ea8bb9dc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.469166 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76746569-7b81-4132-8609-fee1ea8bb9dc-kube-api-access-4zngv" (OuterVolumeSpecName: "kube-api-access-4zngv") pod "76746569-7b81-4132-8609-fee1ea8bb9dc" (UID: "76746569-7b81-4132-8609-fee1ea8bb9dc"). InnerVolumeSpecName "kube-api-access-4zngv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.471056 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76746569-7b81-4132-8609-fee1ea8bb9dc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "76746569-7b81-4132-8609-fee1ea8bb9dc" (UID: "76746569-7b81-4132-8609-fee1ea8bb9dc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.507970 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"]
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.511320 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d5f76c4f-6cs7x"]
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.563400 4825 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76746569-7b81-4132-8609-fee1ea8bb9dc-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.563447 4825 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.563463 4825 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.563478 4825 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76746569-7b81-4132-8609-fee1ea8bb9dc-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.563490 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zngv\" (UniqueName: \"kubernetes.io/projected/76746569-7b81-4132-8609-fee1ea8bb9dc-kube-api-access-4zngv\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.624880 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-w6fd4"]
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.672237 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-psl6h"
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.672295 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-psl6h"
Feb 19 00:11:26 crc kubenswrapper[4825]: I0219 00:11:26.724447 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-psl6h"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.074011 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1196222-ab8c-400c-9c0c-2649d6292f53" path="/var/lib/kubelet/pods/f1196222-ab8c-400c-9c0c-2649d6292f53/volumes"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.178928 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s" event={"ID":"76746569-7b81-4132-8609-fee1ea8bb9dc","Type":"ContainerDied","Data":"5138620cc606a655043950d8caf2fe36420e958f84b273c522315e819523e12a"}
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.178965 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5897d57d49-thm2s"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.179051 4825 scope.go:117] "RemoveContainer" containerID="125e73d3227ed6aed31d3fa276fae92628db67ef58eb134bc4d2b37546e00e2a"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.199675 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5897d57d49-thm2s"]
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.202452 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5897d57d49-thm2s"]
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.244292 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-psl6h"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.303667 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"]
Feb 19 00:11:27 crc kubenswrapper[4825]: E0219 00:11:27.304089 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1196222-ab8c-400c-9c0c-2649d6292f53" containerName="route-controller-manager"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.304106 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1196222-ab8c-400c-9c0c-2649d6292f53" containerName="route-controller-manager"
Feb 19 00:11:27 crc kubenswrapper[4825]: E0219 00:11:27.304133 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76746569-7b81-4132-8609-fee1ea8bb9dc" containerName="controller-manager"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.304143 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="76746569-7b81-4132-8609-fee1ea8bb9dc" containerName="controller-manager"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.304270 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="76746569-7b81-4132-8609-fee1ea8bb9dc" containerName="controller-manager"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.304288 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1196222-ab8c-400c-9c0c-2649d6292f53" containerName="route-controller-manager"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.304826 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.306259 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"]
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.309567 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.309747 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.311268 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.312660 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.313771 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.314013 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.314166 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.318644 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"]
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.320211 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.320302 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.320903 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.321158 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.321351 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.321627 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.324184 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"]
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.324291 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.376164 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d447357b-1199-44f3-93f1-679487d4d4a1-config\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.376213 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5wcj\" (UniqueName: \"kubernetes.io/projected/91d13399-7a53-41d1-9633-91fb1b5252d1-kube-api-access-k5wcj\") pod \"route-controller-manager-fc48fd444-pkkxf\" (UID: \"91d13399-7a53-41d1-9633-91fb1b5252d1\") " pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.376243 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91d13399-7a53-41d1-9633-91fb1b5252d1-client-ca\") pod \"route-controller-manager-fc48fd444-pkkxf\" (UID: \"91d13399-7a53-41d1-9633-91fb1b5252d1\") " pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.376263 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91d13399-7a53-41d1-9633-91fb1b5252d1-config\") pod \"route-controller-manager-fc48fd444-pkkxf\" (UID: \"91d13399-7a53-41d1-9633-91fb1b5252d1\") " pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.376283 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d447357b-1199-44f3-93f1-679487d4d4a1-proxy-ca-bundles\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.376301 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7kzt\" (UniqueName: \"kubernetes.io/projected/d447357b-1199-44f3-93f1-679487d4d4a1-kube-api-access-q7kzt\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.376326 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d447357b-1199-44f3-93f1-679487d4d4a1-client-ca\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.376344 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91d13399-7a53-41d1-9633-91fb1b5252d1-serving-cert\") pod \"route-controller-manager-fc48fd444-pkkxf\" (UID: \"91d13399-7a53-41d1-9633-91fb1b5252d1\") " pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.376363 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d447357b-1199-44f3-93f1-679487d4d4a1-serving-cert\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.477996 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d447357b-1199-44f3-93f1-679487d4d4a1-config\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.478055 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5wcj\" (UniqueName: \"kubernetes.io/projected/91d13399-7a53-41d1-9633-91fb1b5252d1-kube-api-access-k5wcj\") pod \"route-controller-manager-fc48fd444-pkkxf\" (UID: \"91d13399-7a53-41d1-9633-91fb1b5252d1\") " pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.478094 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91d13399-7a53-41d1-9633-91fb1b5252d1-client-ca\") pod \"route-controller-manager-fc48fd444-pkkxf\" (UID: \"91d13399-7a53-41d1-9633-91fb1b5252d1\") " pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.478120 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91d13399-7a53-41d1-9633-91fb1b5252d1-config\") pod \"route-controller-manager-fc48fd444-pkkxf\" (UID: \"91d13399-7a53-41d1-9633-91fb1b5252d1\") " pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.478140 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d447357b-1199-44f3-93f1-679487d4d4a1-proxy-ca-bundles\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.478155 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7kzt\" (UniqueName: \"kubernetes.io/projected/d447357b-1199-44f3-93f1-679487d4d4a1-kube-api-access-q7kzt\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.478184 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d447357b-1199-44f3-93f1-679487d4d4a1-client-ca\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.478203 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91d13399-7a53-41d1-9633-91fb1b5252d1-serving-cert\") pod \"route-controller-manager-fc48fd444-pkkxf\" (UID: \"91d13399-7a53-41d1-9633-91fb1b5252d1\") " pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.478225 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d447357b-1199-44f3-93f1-679487d4d4a1-serving-cert\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.479664 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d447357b-1199-44f3-93f1-679487d4d4a1-client-ca\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.479750 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91d13399-7a53-41d1-9633-91fb1b5252d1-client-ca\") pod \"route-controller-manager-fc48fd444-pkkxf\" (UID: \"91d13399-7a53-41d1-9633-91fb1b5252d1\") " pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.480149 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d447357b-1199-44f3-93f1-679487d4d4a1-proxy-ca-bundles\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.480754 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91d13399-7a53-41d1-9633-91fb1b5252d1-config\") pod \"route-controller-manager-fc48fd444-pkkxf\" (UID: \"91d13399-7a53-41d1-9633-91fb1b5252d1\") " pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.480851 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d447357b-1199-44f3-93f1-679487d4d4a1-config\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.485628 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91d13399-7a53-41d1-9633-91fb1b5252d1-serving-cert\") pod \"route-controller-manager-fc48fd444-pkkxf\" (UID: \"91d13399-7a53-41d1-9633-91fb1b5252d1\") " pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.491627 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d447357b-1199-44f3-93f1-679487d4d4a1-serving-cert\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.496582 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5wcj\" (UniqueName: \"kubernetes.io/projected/91d13399-7a53-41d1-9633-91fb1b5252d1-kube-api-access-k5wcj\") pod \"route-controller-manager-fc48fd444-pkkxf\" (UID: \"91d13399-7a53-41d1-9633-91fb1b5252d1\") " pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.498495 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7kzt\" (UniqueName: \"kubernetes.io/projected/d447357b-1199-44f3-93f1-679487d4d4a1-kube-api-access-q7kzt\") pod \"controller-manager-6b957f7ccb-jqt4x\" (UID: \"d447357b-1199-44f3-93f1-679487d4d4a1\") " pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.632694 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.644443 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:27 crc kubenswrapper[4825]: I0219 00:11:27.951974 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"]
Feb 19 00:11:28 crc kubenswrapper[4825]: I0219 00:11:28.085590 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"]
Feb 19 00:11:28 crc kubenswrapper[4825]: I0219 00:11:28.199162 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf" event={"ID":"91d13399-7a53-41d1-9633-91fb1b5252d1","Type":"ContainerStarted","Data":"fe33950e8bfcc14ed8ac4eb144607c2a685b1f6479ddaeba83fb8d4ef5e7af17"}
Feb 19 00:11:28 crc kubenswrapper[4825]: I0219 00:11:28.199678 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:28 crc kubenswrapper[4825]: I0219 00:11:28.199696 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf" event={"ID":"91d13399-7a53-41d1-9633-91fb1b5252d1","Type":"ContainerStarted","Data":"f90cf62510b643256e82274269f87bc86ff1f3d4899c3c4313df2da4d4b698ec"}
Feb 19 00:11:28 crc kubenswrapper[4825]: I0219 00:11:28.201819 4825 patch_prober.go:28] interesting pod/route-controller-manager-fc48fd444-pkkxf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body=
Feb 19 00:11:28 crc kubenswrapper[4825]: I0219 00:11:28.202153 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf" podUID="91d13399-7a53-41d1-9633-91fb1b5252d1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused"
Feb 19 00:11:28 crc kubenswrapper[4825]: I0219 00:11:28.204687 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x" event={"ID":"d447357b-1199-44f3-93f1-679487d4d4a1","Type":"ContainerStarted","Data":"2b04cf602ffc77216c2fc36af5233b23513597fa326dd07f2c42b242fdb26501"}
Feb 19 00:11:28 crc kubenswrapper[4825]: I0219 00:11:28.224525 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf" podStartSLOduration=3.224497 podStartE2EDuration="3.224497s" podCreationTimestamp="2026-02-19 00:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:28.218710294 +0000 UTC m=+233.909676341" watchObservedRunningTime="2026-02-19 00:11:28.224497 +0000 UTC m=+233.915463037"
Feb 19 00:11:28 crc kubenswrapper[4825]: I0219 00:11:28.742219 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ptklb"
Feb 19 00:11:29 crc kubenswrapper[4825]: I0219 00:11:29.074014 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76746569-7b81-4132-8609-fee1ea8bb9dc" path="/var/lib/kubelet/pods/76746569-7b81-4132-8609-fee1ea8bb9dc/volumes"
Feb 19 00:11:29 crc kubenswrapper[4825]: I0219 00:11:29.222661 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x" event={"ID":"d447357b-1199-44f3-93f1-679487d4d4a1","Type":"ContainerStarted","Data":"59a566fc5fe5035ef72666603b750907d3bd70521b08cf681364ea29b322834d"}
Feb 19 00:11:29 crc kubenswrapper[4825]: I0219 00:11:29.223271 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:29 crc kubenswrapper[4825]: I0219 00:11:29.227335 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fc48fd444-pkkxf"
Feb 19 00:11:29 crc kubenswrapper[4825]: I0219 00:11:29.227965 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x"
Feb 19 00:11:29 crc kubenswrapper[4825]: I0219 00:11:29.247081 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b957f7ccb-jqt4x" podStartSLOduration=4.247057778 podStartE2EDuration="4.247057778s" podCreationTimestamp="2026-02-19 00:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:29.243966905 +0000 UTC m=+234.934932952" watchObservedRunningTime="2026-02-19 00:11:29.247057778 +0000 UTC m=+234.938023825"
Feb 19 00:11:29 crc kubenswrapper[4825]: I0219 00:11:29.509982 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-psl6h"]
Feb 19 00:11:29 crc kubenswrapper[4825]: I0219 00:11:29.510186 4825 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-psl6h" podUID="87f1d2aa-1887-4322-a053-12e950fa2250" containerName="registry-server" containerID="cri-o://906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de" gracePeriod=2 Feb 19 00:11:29 crc kubenswrapper[4825]: I0219 00:11:29.729077 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:11:29 crc kubenswrapper[4825]: I0219 00:11:29.780925 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zstqt" Feb 19 00:11:29 crc kubenswrapper[4825]: I0219 00:11:29.939446 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.029908 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f1d2aa-1887-4322-a053-12e950fa2250-catalog-content\") pod \"87f1d2aa-1887-4322-a053-12e950fa2250\" (UID: \"87f1d2aa-1887-4322-a053-12e950fa2250\") " Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.029952 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs2bq\" (UniqueName: \"kubernetes.io/projected/87f1d2aa-1887-4322-a053-12e950fa2250-kube-api-access-gs2bq\") pod \"87f1d2aa-1887-4322-a053-12e950fa2250\" (UID: \"87f1d2aa-1887-4322-a053-12e950fa2250\") " Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.029990 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f1d2aa-1887-4322-a053-12e950fa2250-utilities\") pod \"87f1d2aa-1887-4322-a053-12e950fa2250\" (UID: \"87f1d2aa-1887-4322-a053-12e950fa2250\") " Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 
00:11:30.030783 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f1d2aa-1887-4322-a053-12e950fa2250-utilities" (OuterVolumeSpecName: "utilities") pod "87f1d2aa-1887-4322-a053-12e950fa2250" (UID: "87f1d2aa-1887-4322-a053-12e950fa2250"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.031813 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f1d2aa-1887-4322-a053-12e950fa2250-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.040776 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f1d2aa-1887-4322-a053-12e950fa2250-kube-api-access-gs2bq" (OuterVolumeSpecName: "kube-api-access-gs2bq") pod "87f1d2aa-1887-4322-a053-12e950fa2250" (UID: "87f1d2aa-1887-4322-a053-12e950fa2250"). InnerVolumeSpecName "kube-api-access-gs2bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.078608 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f1d2aa-1887-4322-a053-12e950fa2250-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87f1d2aa-1887-4322-a053-12e950fa2250" (UID: "87f1d2aa-1887-4322-a053-12e950fa2250"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.107620 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptklb"] Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.107957 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ptklb" podUID="ecd4f41a-e074-49eb-950d-44a9a4140304" containerName="registry-server" containerID="cri-o://57a4c3e23db5570ec62cfbd6c2252208d27f4d53c93a047d33861626774c45d8" gracePeriod=2 Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.132833 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f1d2aa-1887-4322-a053-12e950fa2250-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.132875 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs2bq\" (UniqueName: \"kubernetes.io/projected/87f1d2aa-1887-4322-a053-12e950fa2250-kube-api-access-gs2bq\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.171075 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.211176 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.235493 4825 generic.go:334] "Generic (PLEG): container finished" podID="87f1d2aa-1887-4322-a053-12e950fa2250" containerID="906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de" exitCode=0 Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.235626 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-psl6h" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.235623 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psl6h" event={"ID":"87f1d2aa-1887-4322-a053-12e950fa2250","Type":"ContainerDied","Data":"906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de"} Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.235769 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psl6h" event={"ID":"87f1d2aa-1887-4322-a053-12e950fa2250","Type":"ContainerDied","Data":"dd277e8978b6c6f3d32cde45ff36ef4110ab976443b23930c8c7c266f53e9d48"} Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.235795 4825 scope.go:117] "RemoveContainer" containerID="906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.238497 4825 generic.go:334] "Generic (PLEG): container finished" podID="ecd4f41a-e074-49eb-950d-44a9a4140304" containerID="57a4c3e23db5570ec62cfbd6c2252208d27f4d53c93a047d33861626774c45d8" exitCode=0 Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.238594 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptklb" event={"ID":"ecd4f41a-e074-49eb-950d-44a9a4140304","Type":"ContainerDied","Data":"57a4c3e23db5570ec62cfbd6c2252208d27f4d53c93a047d33861626774c45d8"} Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.274823 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-psl6h"] Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.276722 4825 scope.go:117] "RemoveContainer" containerID="4a941f854c7c50b76e6b367b8d0a8a93250b79f323cdb03ba856c8b99b77a184" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.280833 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-psl6h"] Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.295947 4825 scope.go:117] "RemoveContainer" containerID="2957966b42006bd68da26013ec891475ad8051da28b5f0d842bbf4a39cb2fa09" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.313184 4825 scope.go:117] "RemoveContainer" containerID="906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de" Feb 19 00:11:30 crc kubenswrapper[4825]: E0219 00:11:30.313961 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de\": container with ID starting with 906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de not found: ID does not exist" containerID="906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.314014 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de"} err="failed to get container status \"906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de\": rpc error: code = NotFound desc = could not find container \"906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de\": container with ID starting with 906ae0b8631d6cfae6b2f971b224f70784421e84e5e955d905f87dde91f624de not found: ID does not exist" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.314045 4825 scope.go:117] "RemoveContainer" containerID="4a941f854c7c50b76e6b367b8d0a8a93250b79f323cdb03ba856c8b99b77a184" Feb 19 00:11:30 crc kubenswrapper[4825]: E0219 00:11:30.314491 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a941f854c7c50b76e6b367b8d0a8a93250b79f323cdb03ba856c8b99b77a184\": container with ID starting with 
4a941f854c7c50b76e6b367b8d0a8a93250b79f323cdb03ba856c8b99b77a184 not found: ID does not exist" containerID="4a941f854c7c50b76e6b367b8d0a8a93250b79f323cdb03ba856c8b99b77a184" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.314546 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a941f854c7c50b76e6b367b8d0a8a93250b79f323cdb03ba856c8b99b77a184"} err="failed to get container status \"4a941f854c7c50b76e6b367b8d0a8a93250b79f323cdb03ba856c8b99b77a184\": rpc error: code = NotFound desc = could not find container \"4a941f854c7c50b76e6b367b8d0a8a93250b79f323cdb03ba856c8b99b77a184\": container with ID starting with 4a941f854c7c50b76e6b367b8d0a8a93250b79f323cdb03ba856c8b99b77a184 not found: ID does not exist" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.314571 4825 scope.go:117] "RemoveContainer" containerID="2957966b42006bd68da26013ec891475ad8051da28b5f0d842bbf4a39cb2fa09" Feb 19 00:11:30 crc kubenswrapper[4825]: E0219 00:11:30.315116 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2957966b42006bd68da26013ec891475ad8051da28b5f0d842bbf4a39cb2fa09\": container with ID starting with 2957966b42006bd68da26013ec891475ad8051da28b5f0d842bbf4a39cb2fa09 not found: ID does not exist" containerID="2957966b42006bd68da26013ec891475ad8051da28b5f0d842bbf4a39cb2fa09" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.315136 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2957966b42006bd68da26013ec891475ad8051da28b5f0d842bbf4a39cb2fa09"} err="failed to get container status \"2957966b42006bd68da26013ec891475ad8051da28b5f0d842bbf4a39cb2fa09\": rpc error: code = NotFound desc = could not find container \"2957966b42006bd68da26013ec891475ad8051da28b5f0d842bbf4a39cb2fa09\": container with ID starting with 2957966b42006bd68da26013ec891475ad8051da28b5f0d842bbf4a39cb2fa09 not found: ID does not 
exist" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.565638 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.638735 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvl85\" (UniqueName: \"kubernetes.io/projected/ecd4f41a-e074-49eb-950d-44a9a4140304-kube-api-access-kvl85\") pod \"ecd4f41a-e074-49eb-950d-44a9a4140304\" (UID: \"ecd4f41a-e074-49eb-950d-44a9a4140304\") " Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.638781 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecd4f41a-e074-49eb-950d-44a9a4140304-utilities\") pod \"ecd4f41a-e074-49eb-950d-44a9a4140304\" (UID: \"ecd4f41a-e074-49eb-950d-44a9a4140304\") " Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.638925 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecd4f41a-e074-49eb-950d-44a9a4140304-catalog-content\") pod \"ecd4f41a-e074-49eb-950d-44a9a4140304\" (UID: \"ecd4f41a-e074-49eb-950d-44a9a4140304\") " Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.640271 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecd4f41a-e074-49eb-950d-44a9a4140304-utilities" (OuterVolumeSpecName: "utilities") pod "ecd4f41a-e074-49eb-950d-44a9a4140304" (UID: "ecd4f41a-e074-49eb-950d-44a9a4140304"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.644856 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd4f41a-e074-49eb-950d-44a9a4140304-kube-api-access-kvl85" (OuterVolumeSpecName: "kube-api-access-kvl85") pod "ecd4f41a-e074-49eb-950d-44a9a4140304" (UID: "ecd4f41a-e074-49eb-950d-44a9a4140304"). InnerVolumeSpecName "kube-api-access-kvl85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.682076 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecd4f41a-e074-49eb-950d-44a9a4140304-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecd4f41a-e074-49eb-950d-44a9a4140304" (UID: "ecd4f41a-e074-49eb-950d-44a9a4140304"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.740961 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecd4f41a-e074-49eb-950d-44a9a4140304-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.741008 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvl85\" (UniqueName: \"kubernetes.io/projected/ecd4f41a-e074-49eb-950d-44a9a4140304-kube-api-access-kvl85\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:30 crc kubenswrapper[4825]: I0219 00:11:30.741028 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecd4f41a-e074-49eb-950d-44a9a4140304-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:31 crc kubenswrapper[4825]: I0219 00:11:31.075305 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f1d2aa-1887-4322-a053-12e950fa2250" 
path="/var/lib/kubelet/pods/87f1d2aa-1887-4322-a053-12e950fa2250/volumes" Feb 19 00:11:31 crc kubenswrapper[4825]: I0219 00:11:31.247165 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ptklb" Feb 19 00:11:31 crc kubenswrapper[4825]: I0219 00:11:31.247174 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ptklb" event={"ID":"ecd4f41a-e074-49eb-950d-44a9a4140304","Type":"ContainerDied","Data":"f3e7fcc22564a6ec7c9d633b112add576e39fc64e5ba59f4adaabe0dd239b202"} Feb 19 00:11:31 crc kubenswrapper[4825]: I0219 00:11:31.247283 4825 scope.go:117] "RemoveContainer" containerID="57a4c3e23db5570ec62cfbd6c2252208d27f4d53c93a047d33861626774c45d8" Feb 19 00:11:31 crc kubenswrapper[4825]: I0219 00:11:31.264926 4825 scope.go:117] "RemoveContainer" containerID="591e05230f73721796c0665bff47226bc7904e431acf0aec5ebbed4739fb6f94" Feb 19 00:11:31 crc kubenswrapper[4825]: I0219 00:11:31.271534 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptklb"] Feb 19 00:11:31 crc kubenswrapper[4825]: I0219 00:11:31.274243 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ptklb"] Feb 19 00:11:31 crc kubenswrapper[4825]: I0219 00:11:31.283533 4825 scope.go:117] "RemoveContainer" containerID="b7bf43c727837cbd11ac1a6ff1344b8e9f4844cf631404f4bcd3fc00474d87cf" Feb 19 00:11:32 crc kubenswrapper[4825]: I0219 00:11:32.512630 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sp65c"] Feb 19 00:11:32 crc kubenswrapper[4825]: I0219 00:11:32.513098 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sp65c" podUID="3f9f8ad3-8653-427d-ad82-6b0157a57827" containerName="registry-server" containerID="cri-o://66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057" 
gracePeriod=2 Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.039291 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.070430 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9f8ad3-8653-427d-ad82-6b0157a57827-utilities\") pod \"3f9f8ad3-8653-427d-ad82-6b0157a57827\" (UID: \"3f9f8ad3-8653-427d-ad82-6b0157a57827\") " Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.070499 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9f8ad3-8653-427d-ad82-6b0157a57827-catalog-content\") pod \"3f9f8ad3-8653-427d-ad82-6b0157a57827\" (UID: \"3f9f8ad3-8653-427d-ad82-6b0157a57827\") " Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.070583 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72mc4\" (UniqueName: \"kubernetes.io/projected/3f9f8ad3-8653-427d-ad82-6b0157a57827-kube-api-access-72mc4\") pod \"3f9f8ad3-8653-427d-ad82-6b0157a57827\" (UID: \"3f9f8ad3-8653-427d-ad82-6b0157a57827\") " Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.072269 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecd4f41a-e074-49eb-950d-44a9a4140304" path="/var/lib/kubelet/pods/ecd4f41a-e074-49eb-950d-44a9a4140304/volumes" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.073910 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f9f8ad3-8653-427d-ad82-6b0157a57827-utilities" (OuterVolumeSpecName: "utilities") pod "3f9f8ad3-8653-427d-ad82-6b0157a57827" (UID: "3f9f8ad3-8653-427d-ad82-6b0157a57827"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.074941 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9f8ad3-8653-427d-ad82-6b0157a57827-kube-api-access-72mc4" (OuterVolumeSpecName: "kube-api-access-72mc4") pod "3f9f8ad3-8653-427d-ad82-6b0157a57827" (UID: "3f9f8ad3-8653-427d-ad82-6b0157a57827"). InnerVolumeSpecName "kube-api-access-72mc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.171634 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72mc4\" (UniqueName: \"kubernetes.io/projected/3f9f8ad3-8653-427d-ad82-6b0157a57827-kube-api-access-72mc4\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.171696 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9f8ad3-8653-427d-ad82-6b0157a57827-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.187626 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f9f8ad3-8653-427d-ad82-6b0157a57827-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f9f8ad3-8653-427d-ad82-6b0157a57827" (UID: "3f9f8ad3-8653-427d-ad82-6b0157a57827"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.261104 4825 generic.go:334] "Generic (PLEG): container finished" podID="3f9f8ad3-8653-427d-ad82-6b0157a57827" containerID="66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057" exitCode=0 Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.261209 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp65c" event={"ID":"3f9f8ad3-8653-427d-ad82-6b0157a57827","Type":"ContainerDied","Data":"66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057"} Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.261285 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sp65c" event={"ID":"3f9f8ad3-8653-427d-ad82-6b0157a57827","Type":"ContainerDied","Data":"11a52d4484934cc3a457548047af163a688cc5a4c13144c9b126e8cea87af0ba"} Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.261338 4825 scope.go:117] "RemoveContainer" containerID="66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.261414 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sp65c" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.273057 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9f8ad3-8653-427d-ad82-6b0157a57827-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.282484 4825 scope.go:117] "RemoveContainer" containerID="fd24d514997210b01dcd7c66e095acda223ceef44982b980f9b712fc16296f9e" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.298308 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sp65c"] Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.301625 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sp65c"] Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.314634 4825 scope.go:117] "RemoveContainer" containerID="37044e900b85346e4dc290d5e1ac7842046efeed3ead0bd24e377240e5a98a8d" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.332103 4825 scope.go:117] "RemoveContainer" containerID="66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057" Feb 19 00:11:33 crc kubenswrapper[4825]: E0219 00:11:33.333075 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057\": container with ID starting with 66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057 not found: ID does not exist" containerID="66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.333108 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057"} err="failed to get container status 
\"66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057\": rpc error: code = NotFound desc = could not find container \"66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057\": container with ID starting with 66ed9497eb9b60ae7d824f67562a509c1e5dcd2d90c8b16c43677483ec565057 not found: ID does not exist" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.333155 4825 scope.go:117] "RemoveContainer" containerID="fd24d514997210b01dcd7c66e095acda223ceef44982b980f9b712fc16296f9e" Feb 19 00:11:33 crc kubenswrapper[4825]: E0219 00:11:33.333943 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd24d514997210b01dcd7c66e095acda223ceef44982b980f9b712fc16296f9e\": container with ID starting with fd24d514997210b01dcd7c66e095acda223ceef44982b980f9b712fc16296f9e not found: ID does not exist" containerID="fd24d514997210b01dcd7c66e095acda223ceef44982b980f9b712fc16296f9e" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.333974 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd24d514997210b01dcd7c66e095acda223ceef44982b980f9b712fc16296f9e"} err="failed to get container status \"fd24d514997210b01dcd7c66e095acda223ceef44982b980f9b712fc16296f9e\": rpc error: code = NotFound desc = could not find container \"fd24d514997210b01dcd7c66e095acda223ceef44982b980f9b712fc16296f9e\": container with ID starting with fd24d514997210b01dcd7c66e095acda223ceef44982b980f9b712fc16296f9e not found: ID does not exist" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.333994 4825 scope.go:117] "RemoveContainer" containerID="37044e900b85346e4dc290d5e1ac7842046efeed3ead0bd24e377240e5a98a8d" Feb 19 00:11:33 crc kubenswrapper[4825]: E0219 00:11:33.334370 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"37044e900b85346e4dc290d5e1ac7842046efeed3ead0bd24e377240e5a98a8d\": container with ID starting with 37044e900b85346e4dc290d5e1ac7842046efeed3ead0bd24e377240e5a98a8d not found: ID does not exist" containerID="37044e900b85346e4dc290d5e1ac7842046efeed3ead0bd24e377240e5a98a8d" Feb 19 00:11:33 crc kubenswrapper[4825]: I0219 00:11:33.334445 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37044e900b85346e4dc290d5e1ac7842046efeed3ead0bd24e377240e5a98a8d"} err="failed to get container status \"37044e900b85346e4dc290d5e1ac7842046efeed3ead0bd24e377240e5a98a8d\": rpc error: code = NotFound desc = could not find container \"37044e900b85346e4dc290d5e1ac7842046efeed3ead0bd24e377240e5a98a8d\": container with ID starting with 37044e900b85346e4dc290d5e1ac7842046efeed3ead0bd24e377240e5a98a8d not found: ID does not exist" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.077911 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9f8ad3-8653-427d-ad82-6b0157a57827" path="/var/lib/kubelet/pods/3f9f8ad3-8653-427d-ad82-6b0157a57827/volumes" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.806276 4825 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.806862 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f1d2aa-1887-4322-a053-12e950fa2250" containerName="extract-utilities" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.806992 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f1d2aa-1887-4322-a053-12e950fa2250" containerName="extract-utilities" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.807093 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9f8ad3-8653-427d-ad82-6b0157a57827" containerName="extract-utilities" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.807178 4825 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="3f9f8ad3-8653-427d-ad82-6b0157a57827" containerName="extract-utilities" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.807276 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f1d2aa-1887-4322-a053-12e950fa2250" containerName="extract-content" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.807356 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f1d2aa-1887-4322-a053-12e950fa2250" containerName="extract-content" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.807449 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f1d2aa-1887-4322-a053-12e950fa2250" containerName="registry-server" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.807554 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f1d2aa-1887-4322-a053-12e950fa2250" containerName="registry-server" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.807647 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9f8ad3-8653-427d-ad82-6b0157a57827" containerName="registry-server" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.807725 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9f8ad3-8653-427d-ad82-6b0157a57827" containerName="registry-server" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.807827 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd4f41a-e074-49eb-950d-44a9a4140304" containerName="extract-content" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.807913 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd4f41a-e074-49eb-950d-44a9a4140304" containerName="extract-content" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.808011 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9f8ad3-8653-427d-ad82-6b0157a57827" containerName="extract-content" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.808099 4825 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="3f9f8ad3-8653-427d-ad82-6b0157a57827" containerName="extract-content" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.808190 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd4f41a-e074-49eb-950d-44a9a4140304" containerName="registry-server" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.808274 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd4f41a-e074-49eb-950d-44a9a4140304" containerName="registry-server" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.808361 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd4f41a-e074-49eb-950d-44a9a4140304" containerName="extract-utilities" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.808469 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd4f41a-e074-49eb-950d-44a9a4140304" containerName="extract-utilities" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.808721 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9f8ad3-8653-427d-ad82-6b0157a57827" containerName="registry-server" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.808829 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd4f41a-e074-49eb-950d-44a9a4140304" containerName="registry-server" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.808912 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f1d2aa-1887-4322-a053-12e950fa2250" containerName="registry-server" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.809365 4825 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.809489 4825 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.809559 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.809847 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346" gracePeriod=15 Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.809919 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711" gracePeriod=15 Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.809966 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5" gracePeriod=15 Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.810000 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25" gracePeriod=15 Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.810112 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9" gracePeriod=15 Feb 19 00:11:35 crc 
kubenswrapper[4825]: E0219 00:11:35.810593 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.810658 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.810674 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.810685 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.810706 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.810717 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.810742 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.810754 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.810772 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.810781 4825 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.810792 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.810804 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.810817 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.810826 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 00:11:35 crc kubenswrapper[4825]: E0219 00:11:35.810838 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.810846 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.811055 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.811078 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.811096 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.811110 4825 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.811121 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.811135 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.811536 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.815983 4825 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.906148 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.906216 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.906256 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.906272 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.906293 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.906308 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 00:11:35.906328 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:35 crc kubenswrapper[4825]: I0219 
00:11:35.906352 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.007687 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.007768 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.007787 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.007825 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.007841 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.007865 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.007893 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.007919 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.007927 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.007979 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.008024 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.008059 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.008061 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.008080 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.007888 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.008098 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:36 crc kubenswrapper[4825]: E0219 00:11:36.217932 4825 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:36 crc kubenswrapper[4825]: E0219 00:11:36.218611 4825 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:36 crc kubenswrapper[4825]: E0219 00:11:36.219194 4825 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:36 crc kubenswrapper[4825]: E0219 00:11:36.219595 4825 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:36 crc kubenswrapper[4825]: E0219 00:11:36.219958 4825 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.219992 4825 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 00:11:36 crc kubenswrapper[4825]: E0219 00:11:36.220258 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" interval="200ms" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.280733 4825 generic.go:334] "Generic (PLEG): container finished" podID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" containerID="07129c57899d6464161c66feaa33ee2c13c00f96816fa41cd272fdc97f441b55" exitCode=0 Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.280780 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a","Type":"ContainerDied","Data":"07129c57899d6464161c66feaa33ee2c13c00f96816fa41cd272fdc97f441b55"} Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.281528 4825 status_manager.go:851] "Failed to get status for pod" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.283157 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.284754 4825 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.285780 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9" exitCode=0 Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.285824 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711" exitCode=0 Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.285837 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5" exitCode=0 Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.285848 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25" exitCode=2 Feb 19 00:11:36 crc kubenswrapper[4825]: I0219 00:11:36.285870 4825 scope.go:117] "RemoveContainer" containerID="a02016d6357a9cbec5bd4df6de20158bb4b6b6b52fb50d4a245cc89999b7e611" Feb 19 00:11:36 crc kubenswrapper[4825]: E0219 00:11:36.421885 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" interval="400ms" Feb 19 00:11:36 crc kubenswrapper[4825]: E0219 00:11:36.824735 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" interval="800ms" Feb 19 
00:11:37 crc kubenswrapper[4825]: I0219 00:11:37.297178 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 00:11:37 crc kubenswrapper[4825]: E0219 00:11:37.626435 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" interval="1.6s" Feb 19 00:11:37 crc kubenswrapper[4825]: I0219 00:11:37.666242 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:11:37 crc kubenswrapper[4825]: I0219 00:11:37.666802 4825 status_manager.go:851] "Failed to get status for pod" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:37 crc kubenswrapper[4825]: I0219 00:11:37.730539 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-kubelet-dir\") pod \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\" (UID: \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\") " Feb 19 00:11:37 crc kubenswrapper[4825]: I0219 00:11:37.730584 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-var-lock\") pod \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\" (UID: \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\") " Feb 19 00:11:37 crc kubenswrapper[4825]: I0219 00:11:37.730716 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-kube-api-access\") pod \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\" (UID: \"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a\") " Feb 19 00:11:37 crc kubenswrapper[4825]: I0219 00:11:37.730652 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-var-lock" (OuterVolumeSpecName: "var-lock") pod "b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" (UID: "b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:11:37 crc kubenswrapper[4825]: I0219 00:11:37.730650 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" (UID: "b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:11:37 crc kubenswrapper[4825]: I0219 00:11:37.731072 4825 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:37 crc kubenswrapper[4825]: I0219 00:11:37.731095 4825 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:37 crc kubenswrapper[4825]: I0219 00:11:37.736939 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" (UID: "b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:37 crc kubenswrapper[4825]: I0219 00:11:37.833123 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.313342 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.315323 4825 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346" exitCode=0 Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.315416 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e990894e71fc5e19a177ad7276d937f56c23cc39bf3741f8bb1bfe0f512cb65" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.317461 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.318487 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a","Type":"ContainerDied","Data":"870c524419b7b9217f39b8e9def94fc29887b908a20531c5674a96e6995eabe3"} Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.318541 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870c524419b7b9217f39b8e9def94fc29887b908a20531c5674a96e6995eabe3" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.318579 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.318854 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.319861 4825 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.320306 4825 status_manager.go:851] "Failed to get status for pod" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.347920 4825 status_manager.go:851] "Failed to get status for pod" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.348341 4825 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.440327 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.440416 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.440483 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.440484 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.440545 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.440715 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.440746 4825 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.440834 4825 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:38 crc kubenswrapper[4825]: I0219 00:11:38.542143 4825 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:39 crc kubenswrapper[4825]: I0219 00:11:39.075962 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 00:11:39 crc kubenswrapper[4825]: E0219 00:11:39.227791 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" interval="3.2s" Feb 19 00:11:39 crc kubenswrapper[4825]: I0219 00:11:39.324209 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:39 crc kubenswrapper[4825]: I0219 00:11:39.325831 4825 status_manager.go:851] "Failed to get status for pod" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:39 crc kubenswrapper[4825]: I0219 00:11:39.326217 4825 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:39 crc kubenswrapper[4825]: I0219 00:11:39.328471 4825 status_manager.go:851] "Failed to get status for pod" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:39 crc kubenswrapper[4825]: I0219 00:11:39.329339 4825 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:40 crc kubenswrapper[4825]: E0219 00:11:40.881750 4825 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.207:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:40 crc kubenswrapper[4825]: 
I0219 00:11:40.882307 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:40 crc kubenswrapper[4825]: E0219 00:11:40.922058 4825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.207:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18957d64f04807f3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:11:40.921612275 +0000 UTC m=+246.612578342,LastTimestamp:2026-02-19 00:11:40.921612275 +0000 UTC m=+246.612578342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:11:41 crc kubenswrapper[4825]: I0219 00:11:41.345479 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"10b562072cedd40ccea8b26adb4a25a28665899404b157c5b321bb514d49c1c2"} Feb 19 00:11:41 crc kubenswrapper[4825]: I0219 00:11:41.346148 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6d3a04ad7c4135bfef33bc43dae1d99cf05c0c07b4a55888a6859490bdb75627"} Feb 19 00:11:41 crc kubenswrapper[4825]: I0219 00:11:41.347038 4825 status_manager.go:851] "Failed to get status for pod" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:41 crc kubenswrapper[4825]: E0219 00:11:41.347815 4825 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.207:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:11:42 crc kubenswrapper[4825]: E0219 00:11:42.429070 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" interval="6.4s" Feb 19 00:11:45 crc kubenswrapper[4825]: I0219 00:11:45.073784 4825 status_manager.go:851] "Failed to get status for pod" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:47 crc kubenswrapper[4825]: E0219 00:11:47.050474 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:47Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:47Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:47Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:47Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:47 crc kubenswrapper[4825]: E0219 00:11:47.051035 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:47 crc kubenswrapper[4825]: E0219 00:11:47.051629 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:47 crc kubenswrapper[4825]: E0219 00:11:47.052162 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 
00:11:47 crc kubenswrapper[4825]: E0219 00:11:47.052689 4825 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:47 crc kubenswrapper[4825]: E0219 00:11:47.052723 4825 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:11:47 crc kubenswrapper[4825]: E0219 00:11:47.091177 4825 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.207:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18957d64f04807f3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:11:40.921612275 +0000 UTC m=+246.612578342,LastTimestamp:2026-02-19 00:11:40.921612275 +0000 UTC m=+246.612578342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:11:48 crc kubenswrapper[4825]: I0219 00:11:48.065477 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:48 crc kubenswrapper[4825]: I0219 00:11:48.066744 4825 status_manager.go:851] "Failed to get status for pod" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:48 crc kubenswrapper[4825]: I0219 00:11:48.090901 4825 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="926bf6b3-77e1-4fd1-846e-3bb651c25002" Feb 19 00:11:48 crc kubenswrapper[4825]: I0219 00:11:48.090936 4825 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="926bf6b3-77e1-4fd1-846e-3bb651c25002" Feb 19 00:11:48 crc kubenswrapper[4825]: E0219 00:11:48.091838 4825 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:48 crc kubenswrapper[4825]: I0219 00:11:48.092859 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:48 crc kubenswrapper[4825]: W0219 00:11:48.169363 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-e46e3ac1acd8fc74848e610a6eab0ae4812855b4ce9b7038f498fc195a6dc63c WatchSource:0}: Error finding container e46e3ac1acd8fc74848e610a6eab0ae4812855b4ce9b7038f498fc195a6dc63c: Status 404 returned error can't find the container with id e46e3ac1acd8fc74848e610a6eab0ae4812855b4ce9b7038f498fc195a6dc63c Feb 19 00:11:48 crc kubenswrapper[4825]: I0219 00:11:48.398561 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e46e3ac1acd8fc74848e610a6eab0ae4812855b4ce9b7038f498fc195a6dc63c"} Feb 19 00:11:48 crc kubenswrapper[4825]: E0219 00:11:48.831484 4825 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.207:6443: connect: connection refused" interval="7s" Feb 19 00:11:48 crc kubenswrapper[4825]: I0219 00:11:48.839434 4825 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 00:11:48 crc kubenswrapper[4825]: I0219 00:11:48.839619 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 
192.168.126.11:10257: connect: connection refused" Feb 19 00:11:49 crc kubenswrapper[4825]: I0219 00:11:49.409886 4825 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8b62d8b58898c1d21126b3330ff1e470cb9210067807d954e0f88303a1bce934" exitCode=0 Feb 19 00:11:49 crc kubenswrapper[4825]: I0219 00:11:49.410002 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8b62d8b58898c1d21126b3330ff1e470cb9210067807d954e0f88303a1bce934"} Feb 19 00:11:49 crc kubenswrapper[4825]: I0219 00:11:49.410662 4825 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="926bf6b3-77e1-4fd1-846e-3bb651c25002" Feb 19 00:11:49 crc kubenswrapper[4825]: I0219 00:11:49.410714 4825 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="926bf6b3-77e1-4fd1-846e-3bb651c25002" Feb 19 00:11:49 crc kubenswrapper[4825]: I0219 00:11:49.411114 4825 status_manager.go:851] "Failed to get status for pod" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:49 crc kubenswrapper[4825]: E0219 00:11:49.411475 4825 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:49 crc kubenswrapper[4825]: I0219 00:11:49.415349 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 00:11:49 crc kubenswrapper[4825]: I0219 00:11:49.415457 4825 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88" exitCode=1 Feb 19 00:11:49 crc kubenswrapper[4825]: I0219 00:11:49.415561 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88"} Feb 19 00:11:49 crc kubenswrapper[4825]: I0219 00:11:49.416352 4825 scope.go:117] "RemoveContainer" containerID="3810deb951d433cc874ad8ad48870caea8f98837af8ddf04276eae5508906d88" Feb 19 00:11:49 crc kubenswrapper[4825]: I0219 00:11:49.416818 4825 status_manager.go:851] "Failed to get status for pod" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:49 crc kubenswrapper[4825]: I0219 00:11:49.417294 4825 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.207:6443: connect: connection refused" Feb 19 00:11:49 crc kubenswrapper[4825]: I0219 00:11:49.699989 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:11:50 crc kubenswrapper[4825]: I0219 00:11:50.429859 4825 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"97a0ab92e85bf93a4ad8c992b40a7f408cc360daf29d32fb8b5081b1dfea124b"} Feb 19 00:11:50 crc kubenswrapper[4825]: I0219 00:11:50.429905 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dc0ed1dcc943ae7f8ecfa7c6ea3a50a913c08e32bfea3d25510be957669b3fa8"} Feb 19 00:11:50 crc kubenswrapper[4825]: I0219 00:11:50.429915 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8bafe74b7e584a4dcea569fca280c47f271d327429f086355f2b040721d68786"} Feb 19 00:11:50 crc kubenswrapper[4825]: I0219 00:11:50.434086 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 00:11:50 crc kubenswrapper[4825]: I0219 00:11:50.434152 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"825bc98eb2d22fca9cdd235e54e79e367b454b475586adb3ab556c4dd2baee97"} Feb 19 00:11:51 crc kubenswrapper[4825]: I0219 00:11:51.447589 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ae42a16120d6ecbbf7daed04f529043ac8ec59ca54899df62c38ead5af6adf92"} Feb 19 00:11:51 crc kubenswrapper[4825]: I0219 00:11:51.448143 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a02411b80c5deeb76bfebe7aa868af83fbd7b01181ef1dbbcabe75a3cc1a7023"} Feb 19 00:11:51 crc kubenswrapper[4825]: I0219 00:11:51.448165 4825 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="926bf6b3-77e1-4fd1-846e-3bb651c25002" Feb 19 00:11:51 crc kubenswrapper[4825]: I0219 00:11:51.448216 4825 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="926bf6b3-77e1-4fd1-846e-3bb651c25002" Feb 19 00:11:51 crc kubenswrapper[4825]: I0219 00:11:51.448275 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:51 crc kubenswrapper[4825]: I0219 00:11:51.660380 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" podUID="a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" containerName="oauth-openshift" containerID="cri-o://9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1" gracePeriod=15 Feb 19 00:11:52 crc kubenswrapper[4825]: I0219 00:11:52.180392 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:11:52 crc kubenswrapper[4825]: I0219 00:11:52.455472 4825 generic.go:334] "Generic (PLEG): container finished" podID="a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" containerID="9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1" exitCode=0 Feb 19 00:11:52 crc kubenswrapper[4825]: I0219 00:11:52.455548 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" event={"ID":"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c","Type":"ContainerDied","Data":"9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1"} Feb 19 00:11:52 crc kubenswrapper[4825]: I0219 00:11:52.456051 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" event={"ID":"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c","Type":"ContainerDied","Data":"0752c08aa49bca121cbc9dbdc95a72c824bfd8c31f5aab57eaa008536f8438d3"} Feb 19 00:11:52 crc kubenswrapper[4825]: I0219 00:11:52.456091 4825 scope.go:117] "RemoveContainer" containerID="9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1" Feb 19 00:11:52 crc kubenswrapper[4825]: I0219 00:11:52.455598 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-w6fd4" Feb 19 00:11:52 crc kubenswrapper[4825]: I0219 00:11:52.475285 4825 scope.go:117] "RemoveContainer" containerID="9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1" Feb 19 00:11:52 crc kubenswrapper[4825]: E0219 00:11:52.475692 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1\": container with ID starting with 9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1 not found: ID does not exist" containerID="9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1" Feb 19 00:11:52 crc kubenswrapper[4825]: I0219 00:11:52.475724 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1"} err="failed to get container status \"9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1\": rpc error: code = NotFound desc = could not find container \"9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1\": container with ID starting with 9934128c27646ec1be9fe8286ebc8225729720f00583f6b9969614f632b7d0b1 not found: ID does not exist" Feb 19 00:11:53 crc kubenswrapper[4825]: I0219 00:11:53.093888 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:53 crc kubenswrapper[4825]: I0219 00:11:53.094159 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:53 crc kubenswrapper[4825]: I0219 00:11:53.098314 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:56 crc kubenswrapper[4825]: I0219 00:11:56.835655 4825 kubelet.go:1914] "Deleted mirror pod because it 
is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:56 crc kubenswrapper[4825]: I0219 00:11:56.893497 4825 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="135f12ba-eea0-418e-a37b-c4e8fec753e0" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206378 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jwwz\" (UniqueName: \"kubernetes.io/projected/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-kube-api-access-8jwwz\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206446 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-audit-policies\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206488 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-session\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206529 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-cliconfig\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206548 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-router-certs\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206570 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-ocp-branding-template\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206591 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-trusted-ca-bundle\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206613 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-login\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206630 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-serving-cert\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206665 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-idp-0-file-data\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206681 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-error\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206719 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-audit-dir\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206736 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-service-ca\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.206757 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-provider-selection\") pod \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\" (UID: \"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c\") " Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.207614 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod 
"a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.207731 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.208087 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.209432 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.209461 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.213601 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.215023 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.216704 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.216917 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.216975 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-kube-api-access-8jwwz" (OuterVolumeSpecName: "kube-api-access-8jwwz") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). InnerVolumeSpecName "kube-api-access-8jwwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.217267 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.217704 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.218481 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.219914 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" (UID: "a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.288390 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.291710 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308541 4825 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308572 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308596 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308607 4825 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-8jwwz\" (UniqueName: \"kubernetes.io/projected/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-kube-api-access-8jwwz\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308619 4825 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308628 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308637 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308646 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308655 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308663 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 
00:11:57.308672 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308680 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308692 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.308700 4825 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.491551 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.492210 4825 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="926bf6b3-77e1-4fd1-846e-3bb651c25002" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.492293 4825 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="926bf6b3-77e1-4fd1-846e-3bb651c25002" Feb 19 00:11:57 crc kubenswrapper[4825]: I0219 00:11:57.494058 4825 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="135f12ba-eea0-418e-a37b-c4e8fec753e0" Feb 19 00:12:06 crc kubenswrapper[4825]: I0219 00:12:06.430872 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 00:12:07 crc kubenswrapper[4825]: I0219 00:12:07.500556 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 00:12:07 crc kubenswrapper[4825]: I0219 00:12:07.500983 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 00:12:07 crc kubenswrapper[4825]: I0219 00:12:07.994180 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 00:12:08 crc kubenswrapper[4825]: I0219 00:12:08.231277 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 00:12:08 crc kubenswrapper[4825]: I0219 00:12:08.296195 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 00:12:08 crc kubenswrapper[4825]: I0219 00:12:08.335247 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 00:12:08 crc kubenswrapper[4825]: I0219 00:12:08.344984 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 00:12:08 crc kubenswrapper[4825]: I0219 00:12:08.355464 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 00:12:08 crc kubenswrapper[4825]: I0219 00:12:08.515195 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 00:12:08 crc kubenswrapper[4825]: I0219 
00:12:08.554737 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 00:12:08 crc kubenswrapper[4825]: I0219 00:12:08.579200 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 00:12:08 crc kubenswrapper[4825]: I0219 00:12:08.579838 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 00:12:08 crc kubenswrapper[4825]: I0219 00:12:08.847174 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:12:08 crc kubenswrapper[4825]: I0219 00:12:08.990483 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.104033 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.174077 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.263698 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.382018 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.465733 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.538402 4825 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.545026 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.605999 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.624784 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.701886 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.747029 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.842624 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 00:12:09 crc kubenswrapper[4825]: I0219 00:12:09.870583 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 00:12:10 crc kubenswrapper[4825]: I0219 00:12:10.203236 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 00:12:10 crc kubenswrapper[4825]: I0219 00:12:10.400582 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 00:12:10 crc kubenswrapper[4825]: I0219 00:12:10.666009 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 00:12:10 crc kubenswrapper[4825]: 
I0219 00:12:10.700023 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 00:12:10 crc kubenswrapper[4825]: I0219 00:12:10.959846 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.039329 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.064962 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.139567 4825 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.186812 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.214085 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.292793 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.368296 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.421375 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.476932 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" 
Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.509597 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.510166 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.578692 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.593604 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.619347 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.647645 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.700496 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.790480 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.796841 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.867896 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 00:12:11 crc kubenswrapper[4825]: I0219 00:12:11.905051 4825 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"audit-1" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.007268 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.148828 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.151099 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.176417 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.296237 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.328785 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.384625 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.396390 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.448644 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.468452 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.490776 4825 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.499927 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.556761 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.603618 4825 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.679340 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.800637 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 00:12:12 crc kubenswrapper[4825]: I0219 00:12:12.959311 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.003480 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.004743 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.032143 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.035669 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.047353 4825 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.095625 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.100262 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.138397 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.155056 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.197447 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.210531 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.373912 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.491359 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.493685 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.786437 4825 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.792475 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.807126 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.813073 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.879487 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 00:12:13 crc kubenswrapper[4825]: I0219 00:12:13.884289 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 00:12:14 crc kubenswrapper[4825]: I0219 00:12:14.011898 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 00:12:14 crc kubenswrapper[4825]: I0219 00:12:14.078419 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 00:12:14 crc kubenswrapper[4825]: I0219 00:12:14.162264 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 00:12:14 crc kubenswrapper[4825]: I0219 00:12:14.255572 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 00:12:14 crc kubenswrapper[4825]: I0219 00:12:14.351344 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 00:12:14 crc kubenswrapper[4825]: I0219 
00:12:14.486546 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 00:12:14 crc kubenswrapper[4825]: I0219 00:12:14.492480 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 00:12:14 crc kubenswrapper[4825]: I0219 00:12:14.617645 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 00:12:14 crc kubenswrapper[4825]: I0219 00:12:14.659793 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 00:12:14 crc kubenswrapper[4825]: I0219 00:12:14.814286 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 00:12:14 crc kubenswrapper[4825]: I0219 00:12:14.862931 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 00:12:14 crc kubenswrapper[4825]: I0219 00:12:14.992569 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.021441 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.145208 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.246128 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.264095 4825 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.330614 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.339269 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.362989 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.387326 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.466023 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.489386 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.516210 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.522733 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.583669 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.711727 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.730606 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.735393 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.803235 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.866632 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.866691 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 19 00:12:15 crc kubenswrapper[4825]: I0219 00:12:15.971294 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.134258 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.287579 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.297543 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.347439 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.444670 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.562954 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.577270 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.584915 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.594291 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.606549 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.611023 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.636015 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.703700 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.712133 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.718163 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.825416 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.895681 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 19 00:12:16 crc kubenswrapper[4825]: I0219 00:12:16.969771 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.007698 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.070061 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.076010 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.166559 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.188010 4825 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.193284 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-w6fd4"]
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.193345 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.199332 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.199605 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.216269 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.216247868 podStartE2EDuration="21.216247868s" podCreationTimestamp="2026-02-19 00:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:17.211634691 +0000 UTC m=+282.902600768" watchObservedRunningTime="2026-02-19 00:12:17.216247868 +0000 UTC m=+282.907213915"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.219882 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.249714 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.249725 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.293195 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.318871 4825 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.372174 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.430758 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.553769 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.562346 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.565098 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.567226 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.605687 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.808901 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.841602 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.945366 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.969847 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 19 00:12:17 crc kubenswrapper[4825]: I0219 00:12:17.981584 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.017933 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.079660 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.182332 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.193211 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.204272 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.286166 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.302832 4825 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.303217 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://10b562072cedd40ccea8b26adb4a25a28665899404b157c5b321bb514d49c1c2" gracePeriod=5
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.365450 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.372976 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.379569 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.407185 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.428138 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.468725 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.638412 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.698756 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.739753 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.869146 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.869173 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.875773 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 19 00:12:18 crc kubenswrapper[4825]: I0219 00:12:18.937399 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.003873 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.015544 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.031636 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-8675767fb8-fg5vd"]
Feb 19 00:12:19 crc kubenswrapper[4825]: E0219 00:12:19.031830 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" containerName="oauth-openshift"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.031841 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" containerName="oauth-openshift"
Feb 19 00:12:19 crc kubenswrapper[4825]: E0219 00:12:19.031850 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.031856 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 00:12:19 crc kubenswrapper[4825]: E0219 00:12:19.031866 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" containerName="installer"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.031872 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" containerName="installer"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.031962 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0deeb6a-6cd6-4cdb-98e0-e0b501a5cf7a" containerName="installer"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.031979 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" containerName="oauth-openshift"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.031987 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.032336 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.035707 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.036158 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.036478 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.037724 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.038854 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.038969 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.039099 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.039146 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.039230 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.039349 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.039378 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.045065 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.049166 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.052539 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.054816 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8675767fb8-fg5vd"]
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.059352 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.073015 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c" path="/var/lib/kubelet/pods/a8882a14-ae57-4c7f-acfc-9ce0d1a59c0c/volumes"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.080324 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.088573 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-router-certs\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.088625 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-user-template-error\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.088661 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-service-ca\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.088720 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-user-template-login\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.088746 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.088766 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.088786 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.088805 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-session\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.088831 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.088946 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.089069 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f346385-179f-47f0-8f1e-8187ffe61b74-audit-dir\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.089184 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4f346385-179f-47f0-8f1e-8187ffe61b74-audit-policies\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.089242 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45n29\" (UniqueName: \"kubernetes.io/projected/4f346385-179f-47f0-8f1e-8187ffe61b74-kube-api-access-45n29\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.089304 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.133632 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.190588 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-service-ca\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.190635 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-user-template-login\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.190659 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.190680 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.190699 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.190855 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-session\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.191419 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-service-ca\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.191627 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.191677 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.191722 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f346385-179f-47f0-8f1e-8187ffe61b74-audit-dir\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.191765 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4f346385-179f-47f0-8f1e-8187ffe61b74-audit-policies\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.191793 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45n29\" (UniqueName: \"kubernetes.io/projected/4f346385-179f-47f0-8f1e-8187ffe61b74-kube-api-access-45n29\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.191825 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.191825 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f346385-179f-47f0-8f1e-8187ffe61b74-audit-dir\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.191905 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-router-certs\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.191935 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-user-template-error\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.192009 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.192563 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.192772 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4f346385-179f-47f0-8f1e-8187ffe61b74-audit-policies\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.215807 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.215901 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-session\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.216074 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-user-template-error\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.216490 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-user-template-login\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.216693 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.217991 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.218093 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.218577 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45n29\" (UniqueName: \"kubernetes.io/projected/4f346385-179f-47f0-8f1e-8187ffe61b74-kube-api-access-45n29\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.223211 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-router-certs\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.224152 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4f346385-179f-47f0-8f1e-8187ffe61b74-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8675767fb8-fg5vd\" (UID: \"4f346385-179f-47f0-8f1e-8187ffe61b74\") " pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.225328 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.234882 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.293736 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.319572 4825 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.351426 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd" Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.412251 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.482203 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.548267 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.581252 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.581597 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.677638 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.746245 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.766310 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 00:12:19.891172 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 00:12:19 crc kubenswrapper[4825]: I0219 
00:12:19.945117 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.012854 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.039374 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.202916 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.208246 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.244286 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.248686 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.261874 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.270925 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.298910 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.388628 4825 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.411174 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.839574 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.925645 4825 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 00:12:20 crc kubenswrapper[4825]: I0219 00:12:20.972952 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 00:12:21 crc kubenswrapper[4825]: I0219 00:12:21.051475 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 00:12:21 crc kubenswrapper[4825]: I0219 00:12:21.311876 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 00:12:21 crc kubenswrapper[4825]: I0219 00:12:21.538776 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 00:12:21 crc kubenswrapper[4825]: I0219 00:12:21.592359 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 00:12:21 crc kubenswrapper[4825]: I0219 00:12:21.635710 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 00:12:21 crc kubenswrapper[4825]: I0219 00:12:21.722307 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 00:12:21 crc kubenswrapper[4825]: I0219 
00:12:21.760277 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.013037 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.014294 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.075374 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.079851 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.089774 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.108743 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.211459 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.232011 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.252818 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.331408 4825 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-authentication/oauth-openshift-8675767fb8-fg5vd"] Feb 19 00:12:22 crc kubenswrapper[4825]: W0219 00:12:22.336739 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f346385_179f_47f0_8f1e_8187ffe61b74.slice/crio-349ca503588cc23677d959fc91a3a0dd505907444d65375c3ed402f2ddb1bf7a WatchSource:0}: Error finding container 349ca503588cc23677d959fc91a3a0dd505907444d65375c3ed402f2ddb1bf7a: Status 404 returned error can't find the container with id 349ca503588cc23677d959fc91a3a0dd505907444d65375c3ed402f2ddb1bf7a Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.419362 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.646684 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd" event={"ID":"4f346385-179f-47f0-8f1e-8187ffe61b74","Type":"ContainerStarted","Data":"349ca503588cc23677d959fc91a3a0dd505907444d65375c3ed402f2ddb1bf7a"} Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.715784 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 00:12:22 crc kubenswrapper[4825]: I0219 00:12:22.827317 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.071923 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.087464 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.562034 4825 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.653395 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.653449 4825 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="10b562072cedd40ccea8b26adb4a25a28665899404b157c5b321bb514d49c1c2" exitCode=137 Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.654893 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd" event={"ID":"4f346385-179f-47f0-8f1e-8187ffe61b74","Type":"ContainerStarted","Data":"620a62eb42f84bf1a1127a4908a6543f41faed952537050e70c9da6d43350d12"} Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.656189 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.661441 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.675838 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-8675767fb8-fg5vd" podStartSLOduration=57.675821938 podStartE2EDuration="57.675821938s" podCreationTimestamp="2026-02-19 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:23.672149176 +0000 UTC m=+289.363115223" watchObservedRunningTime="2026-02-19 00:12:23.675821938 +0000 UTC m=+289.366787975" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.872499 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.872632 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958214 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958316 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958369 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958407 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958451 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958468 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958458 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958498 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958646 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958727 4825 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958742 4825 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958755 4825 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.958768 4825 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:23 crc kubenswrapper[4825]: I0219 00:12:23.965524 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:12:24 crc kubenswrapper[4825]: I0219 00:12:24.059480 4825 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:24 crc kubenswrapper[4825]: I0219 00:12:24.419022 4825 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 00:12:24 crc kubenswrapper[4825]: I0219 00:12:24.661734 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 00:12:24 crc kubenswrapper[4825]: I0219 00:12:24.661882 4825 scope.go:117] "RemoveContainer" containerID="10b562072cedd40ccea8b26adb4a25a28665899404b157c5b321bb514d49c1c2" Feb 19 00:12:24 crc kubenswrapper[4825]: I0219 00:12:24.661926 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:25 crc kubenswrapper[4825]: I0219 00:12:25.078663 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 19 00:12:25 crc kubenswrapper[4825]: I0219 00:12:25.733413 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.814540 4825 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.951661 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xs8g"] Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.951963 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8xs8g" podUID="fe3e4f27-2ef4-4187-911b-135249a4454f" containerName="registry-server" containerID="cri-o://f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b" gracePeriod=30 Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.961607 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zgq6r"] Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.961889 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zgq6r" podUID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" containerName="registry-server" containerID="cri-o://89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9" gracePeriod=30 Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.967393 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dc72t"] Feb 19 00:12:34 crc 
kubenswrapper[4825]: I0219 00:12:34.967695 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" podUID="76110eca-3d21-477d-9656-965eaa768c21" containerName="marketplace-operator" containerID="cri-o://16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734" gracePeriod=30 Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.977344 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g8mt"] Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.977626 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7g8mt" podUID="5430045e-a57c-4dd3-8205-737c277afd00" containerName="registry-server" containerID="cri-o://bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093" gracePeriod=30 Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.978624 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zstqt"] Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.978861 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zstqt" podUID="1e372937-4e80-4153-bf75-7811efb6750b" containerName="registry-server" containerID="cri-o://d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d" gracePeriod=30 Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.989732 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f5njj"] Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.990790 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" Feb 19 00:12:34 crc kubenswrapper[4825]: I0219 00:12:34.995279 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f5njj"] Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.128366 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d005d56b-57dc-4399-899f-3e4945f8d94d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f5njj\" (UID: \"d005d56b-57dc-4399-899f-3e4945f8d94d\") " pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.128929 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw98s\" (UniqueName: \"kubernetes.io/projected/d005d56b-57dc-4399-899f-3e4945f8d94d-kube-api-access-kw98s\") pod \"marketplace-operator-79b997595-f5njj\" (UID: \"d005d56b-57dc-4399-899f-3e4945f8d94d\") " pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.129101 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d005d56b-57dc-4399-899f-3e4945f8d94d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f5njj\" (UID: \"d005d56b-57dc-4399-899f-3e4945f8d94d\") " pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.230695 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d005d56b-57dc-4399-899f-3e4945f8d94d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f5njj\" (UID: 
\"d005d56b-57dc-4399-899f-3e4945f8d94d\") " pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.231887 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d005d56b-57dc-4399-899f-3e4945f8d94d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f5njj\" (UID: \"d005d56b-57dc-4399-899f-3e4945f8d94d\") " pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.230864 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d005d56b-57dc-4399-899f-3e4945f8d94d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f5njj\" (UID: \"d005d56b-57dc-4399-899f-3e4945f8d94d\") " pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.232043 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw98s\" (UniqueName: \"kubernetes.io/projected/d005d56b-57dc-4399-899f-3e4945f8d94d-kube-api-access-kw98s\") pod \"marketplace-operator-79b997595-f5njj\" (UID: \"d005d56b-57dc-4399-899f-3e4945f8d94d\") " pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.239465 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d005d56b-57dc-4399-899f-3e4945f8d94d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f5njj\" (UID: \"d005d56b-57dc-4399-899f-3e4945f8d94d\") " pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.250961 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kw98s\" (UniqueName: \"kubernetes.io/projected/d005d56b-57dc-4399-899f-3e4945f8d94d-kube-api-access-kw98s\") pod \"marketplace-operator-79b997595-f5njj\" (UID: \"d005d56b-57dc-4399-899f-3e4945f8d94d\") " pod="openshift-marketplace/marketplace-operator-79b997595-f5njj"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.323598 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f5njj"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.467417 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xs8g"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.538829 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3e4f27-2ef4-4187-911b-135249a4454f-catalog-content\") pod \"fe3e4f27-2ef4-4187-911b-135249a4454f\" (UID: \"fe3e4f27-2ef4-4187-911b-135249a4454f\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.538907 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg4nd\" (UniqueName: \"kubernetes.io/projected/fe3e4f27-2ef4-4187-911b-135249a4454f-kube-api-access-fg4nd\") pod \"fe3e4f27-2ef4-4187-911b-135249a4454f\" (UID: \"fe3e4f27-2ef4-4187-911b-135249a4454f\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.538965 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3e4f27-2ef4-4187-911b-135249a4454f-utilities\") pod \"fe3e4f27-2ef4-4187-911b-135249a4454f\" (UID: \"fe3e4f27-2ef4-4187-911b-135249a4454f\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.543686 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3e4f27-2ef4-4187-911b-135249a4454f-utilities" (OuterVolumeSpecName: "utilities") pod "fe3e4f27-2ef4-4187-911b-135249a4454f" (UID: "fe3e4f27-2ef4-4187-911b-135249a4454f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.547196 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3e4f27-2ef4-4187-911b-135249a4454f-kube-api-access-fg4nd" (OuterVolumeSpecName: "kube-api-access-fg4nd") pod "fe3e4f27-2ef4-4187-911b-135249a4454f" (UID: "fe3e4f27-2ef4-4187-911b-135249a4454f"). InnerVolumeSpecName "kube-api-access-fg4nd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.548245 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.561320 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.580473 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zgq6r"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.598733 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zstqt"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.611671 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3e4f27-2ef4-4187-911b-135249a4454f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe3e4f27-2ef4-4187-911b-135249a4454f" (UID: "fe3e4f27-2ef4-4187-911b-135249a4454f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.633012 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f5njj"]
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.640101 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76110eca-3d21-477d-9656-965eaa768c21-marketplace-trusted-ca\") pod \"76110eca-3d21-477d-9656-965eaa768c21\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.640178 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqgp2\" (UniqueName: \"kubernetes.io/projected/76110eca-3d21-477d-9656-965eaa768c21-kube-api-access-nqgp2\") pod \"76110eca-3d21-477d-9656-965eaa768c21\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.640198 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76110eca-3d21-477d-9656-965eaa768c21-marketplace-operator-metrics\") pod \"76110eca-3d21-477d-9656-965eaa768c21\" (UID: \"76110eca-3d21-477d-9656-965eaa768c21\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.640267 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lp79\" (UniqueName: \"kubernetes.io/projected/5430045e-a57c-4dd3-8205-737c277afd00-kube-api-access-7lp79\") pod \"5430045e-a57c-4dd3-8205-737c277afd00\" (UID: \"5430045e-a57c-4dd3-8205-737c277afd00\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.640310 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5430045e-a57c-4dd3-8205-737c277afd00-catalog-content\") pod \"5430045e-a57c-4dd3-8205-737c277afd00\" (UID: \"5430045e-a57c-4dd3-8205-737c277afd00\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.640342 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5430045e-a57c-4dd3-8205-737c277afd00-utilities\") pod \"5430045e-a57c-4dd3-8205-737c277afd00\" (UID: \"5430045e-a57c-4dd3-8205-737c277afd00\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.640559 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3e4f27-2ef4-4187-911b-135249a4454f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.640576 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg4nd\" (UniqueName: \"kubernetes.io/projected/fe3e4f27-2ef4-4187-911b-135249a4454f-kube-api-access-fg4nd\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.640587 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3e4f27-2ef4-4187-911b-135249a4454f-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.641831 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5430045e-a57c-4dd3-8205-737c277afd00-utilities" (OuterVolumeSpecName: "utilities") pod "5430045e-a57c-4dd3-8205-737c277afd00" (UID: "5430045e-a57c-4dd3-8205-737c277afd00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.641932 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76110eca-3d21-477d-9656-965eaa768c21-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "76110eca-3d21-477d-9656-965eaa768c21" (UID: "76110eca-3d21-477d-9656-965eaa768c21"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.644550 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5430045e-a57c-4dd3-8205-737c277afd00-kube-api-access-7lp79" (OuterVolumeSpecName: "kube-api-access-7lp79") pod "5430045e-a57c-4dd3-8205-737c277afd00" (UID: "5430045e-a57c-4dd3-8205-737c277afd00"). InnerVolumeSpecName "kube-api-access-7lp79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.647449 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76110eca-3d21-477d-9656-965eaa768c21-kube-api-access-nqgp2" (OuterVolumeSpecName: "kube-api-access-nqgp2") pod "76110eca-3d21-477d-9656-965eaa768c21" (UID: "76110eca-3d21-477d-9656-965eaa768c21"). InnerVolumeSpecName "kube-api-access-nqgp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.649564 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76110eca-3d21-477d-9656-965eaa768c21-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "76110eca-3d21-477d-9656-965eaa768c21" (UID: "76110eca-3d21-477d-9656-965eaa768c21"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.665995 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5430045e-a57c-4dd3-8205-737c277afd00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5430045e-a57c-4dd3-8205-737c277afd00" (UID: "5430045e-a57c-4dd3-8205-737c277afd00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.740885 4825 generic.go:334] "Generic (PLEG): container finished" podID="76110eca-3d21-477d-9656-965eaa768c21" containerID="16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734" exitCode=0
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.740963 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" event={"ID":"76110eca-3d21-477d-9656-965eaa768c21","Type":"ContainerDied","Data":"16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734"}
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.740968 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.740991 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dc72t" event={"ID":"76110eca-3d21-477d-9656-965eaa768c21","Type":"ContainerDied","Data":"410e6a78a1ce59fb867de82fc2e3a58502978888ef740fced45173bb0b62e706"}
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.741019 4825 scope.go:117] "RemoveContainer" containerID="16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.741720 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7139b54-5e59-487a-bbf3-2ac657e5e39d-catalog-content\") pod \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\" (UID: \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.741764 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g69f\" (UniqueName: \"kubernetes.io/projected/d7139b54-5e59-487a-bbf3-2ac657e5e39d-kube-api-access-6g69f\") pod \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\" (UID: \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.741827 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e372937-4e80-4153-bf75-7811efb6750b-utilities\") pod \"1e372937-4e80-4153-bf75-7811efb6750b\" (UID: \"1e372937-4e80-4153-bf75-7811efb6750b\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.741949 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx2m5\" (UniqueName: \"kubernetes.io/projected/1e372937-4e80-4153-bf75-7811efb6750b-kube-api-access-mx2m5\") pod \"1e372937-4e80-4153-bf75-7811efb6750b\" (UID: \"1e372937-4e80-4153-bf75-7811efb6750b\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.742560 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7139b54-5e59-487a-bbf3-2ac657e5e39d-utilities\") pod \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\" (UID: \"d7139b54-5e59-487a-bbf3-2ac657e5e39d\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.742968 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e372937-4e80-4153-bf75-7811efb6750b-utilities" (OuterVolumeSpecName: "utilities") pod "1e372937-4e80-4153-bf75-7811efb6750b" (UID: "1e372937-4e80-4153-bf75-7811efb6750b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.743345 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e372937-4e80-4153-bf75-7811efb6750b-catalog-content\") pod \"1e372937-4e80-4153-bf75-7811efb6750b\" (UID: \"1e372937-4e80-4153-bf75-7811efb6750b\") "
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.743551 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7139b54-5e59-487a-bbf3-2ac657e5e39d-utilities" (OuterVolumeSpecName: "utilities") pod "d7139b54-5e59-487a-bbf3-2ac657e5e39d" (UID: "d7139b54-5e59-487a-bbf3-2ac657e5e39d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.743821 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7139b54-5e59-487a-bbf3-2ac657e5e39d-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.743847 4825 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76110eca-3d21-477d-9656-965eaa768c21-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.743860 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqgp2\" (UniqueName: \"kubernetes.io/projected/76110eca-3d21-477d-9656-965eaa768c21-kube-api-access-nqgp2\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.743870 4825 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76110eca-3d21-477d-9656-965eaa768c21-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.743881 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e372937-4e80-4153-bf75-7811efb6750b-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.743890 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lp79\" (UniqueName: \"kubernetes.io/projected/5430045e-a57c-4dd3-8205-737c277afd00-kube-api-access-7lp79\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.743899 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5430045e-a57c-4dd3-8205-737c277afd00-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.744208 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5430045e-a57c-4dd3-8205-737c277afd00-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.745697 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7139b54-5e59-487a-bbf3-2ac657e5e39d-kube-api-access-6g69f" (OuterVolumeSpecName: "kube-api-access-6g69f") pod "d7139b54-5e59-487a-bbf3-2ac657e5e39d" (UID: "d7139b54-5e59-487a-bbf3-2ac657e5e39d"). InnerVolumeSpecName "kube-api-access-6g69f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.746664 4825 generic.go:334] "Generic (PLEG): container finished" podID="fe3e4f27-2ef4-4187-911b-135249a4454f" containerID="f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b" exitCode=0
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.746722 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xs8g" event={"ID":"fe3e4f27-2ef4-4187-911b-135249a4454f","Type":"ContainerDied","Data":"f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b"}
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.746743 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xs8g" event={"ID":"fe3e4f27-2ef4-4187-911b-135249a4454f","Type":"ContainerDied","Data":"b0dd44d98a1759ede0d507f2693ac82c227e5d1abf0f4d29ca8891e360eec11a"}
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.746741 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xs8g"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.746760 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e372937-4e80-4153-bf75-7811efb6750b-kube-api-access-mx2m5" (OuterVolumeSpecName: "kube-api-access-mx2m5") pod "1e372937-4e80-4153-bf75-7811efb6750b" (UID: "1e372937-4e80-4153-bf75-7811efb6750b"). InnerVolumeSpecName "kube-api-access-mx2m5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.750786 4825 generic.go:334] "Generic (PLEG): container finished" podID="1e372937-4e80-4153-bf75-7811efb6750b" containerID="d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d" exitCode=0
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.750820 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zstqt" event={"ID":"1e372937-4e80-4153-bf75-7811efb6750b","Type":"ContainerDied","Data":"d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d"}
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.750861 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zstqt" event={"ID":"1e372937-4e80-4153-bf75-7811efb6750b","Type":"ContainerDied","Data":"06dd147a8bd97a80361db08b3f90fe62fb9ad08513d927731503f19feea7c616"}
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.751132 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zstqt"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.753005 4825 generic.go:334] "Generic (PLEG): container finished" podID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" containerID="89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9" exitCode=0
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.753072 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zgq6r"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.753071 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgq6r" event={"ID":"d7139b54-5e59-487a-bbf3-2ac657e5e39d","Type":"ContainerDied","Data":"89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9"}
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.753188 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zgq6r" event={"ID":"d7139b54-5e59-487a-bbf3-2ac657e5e39d","Type":"ContainerDied","Data":"32a9f035f6af65a419323b823347882027c54740d4788f3e4c1a561853bf65a4"}
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.753978 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" event={"ID":"d005d56b-57dc-4399-899f-3e4945f8d94d","Type":"ContainerStarted","Data":"0757a4784f4d3ad161a974958a3021432f1a1ac4af2ae51f6079e9d565393e35"}
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.757894 4825 generic.go:334] "Generic (PLEG): container finished" podID="5430045e-a57c-4dd3-8205-737c277afd00" containerID="bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093" exitCode=0
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.757929 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g8mt" event={"ID":"5430045e-a57c-4dd3-8205-737c277afd00","Type":"ContainerDied","Data":"bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093"}
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.757953 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g8mt" event={"ID":"5430045e-a57c-4dd3-8205-737c277afd00","Type":"ContainerDied","Data":"3148d6f4a53ddc0a79debdc2e2ad2add1840cd71e1dd6a5c124df60ce7e3c04c"}
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.757961 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7g8mt"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.774962 4825 scope.go:117] "RemoveContainer" containerID="16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.781316 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dc72t"]
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.783718 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dc72t"]
Feb 19 00:12:35 crc kubenswrapper[4825]: E0219 00:12:35.789933 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734\": container with ID starting with 16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734 not found: ID does not exist" containerID="16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.789983 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734"} err="failed to get container status \"16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734\": rpc error: code = NotFound desc = could not find container \"16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734\": container with ID starting with 16a29925a3fae16ba0837e4800f840c461b3c1a66b55b6d6dcb58f3e81b7a734 not found: ID does not exist"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.790007 4825 scope.go:117] "RemoveContainer" containerID="f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.816130 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xs8g"]
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.820705 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8xs8g"]
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.824649 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7139b54-5e59-487a-bbf3-2ac657e5e39d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7139b54-5e59-487a-bbf3-2ac657e5e39d" (UID: "d7139b54-5e59-487a-bbf3-2ac657e5e39d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.839134 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g8mt"]
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.845360 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7139b54-5e59-487a-bbf3-2ac657e5e39d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.845408 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g69f\" (UniqueName: \"kubernetes.io/projected/d7139b54-5e59-487a-bbf3-2ac657e5e39d-kube-api-access-6g69f\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.845419 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx2m5\" (UniqueName: \"kubernetes.io/projected/1e372937-4e80-4153-bf75-7811efb6750b-kube-api-access-mx2m5\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.853603 4825 scope.go:117] "RemoveContainer" containerID="1561bde9a019ad75990b3e3e9356542b9f7b39c928e474658a77a9e608ae4c02"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.860499 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g8mt"]
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.869143 4825 scope.go:117] "RemoveContainer" containerID="04495eea170b3b6ac7d6de27ddc9c4d259cae7e66e02a1d5d08aa8a23a9c3dae"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.886587 4825 scope.go:117] "RemoveContainer" containerID="f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b"
Feb 19 00:12:35 crc kubenswrapper[4825]: E0219 00:12:35.887018 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b\": container with ID starting with f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b not found: ID does not exist" containerID="f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.887054 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b"} err="failed to get container status \"f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b\": rpc error: code = NotFound desc = could not find container \"f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b\": container with ID starting with f9189a922a11ba5a6f9a2027b230e4be05beccaf2944046b21010c2d1015046b not found: ID does not exist"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.887074 4825 scope.go:117] "RemoveContainer" containerID="1561bde9a019ad75990b3e3e9356542b9f7b39c928e474658a77a9e608ae4c02"
Feb 19 00:12:35 crc kubenswrapper[4825]: E0219 00:12:35.887382 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1561bde9a019ad75990b3e3e9356542b9f7b39c928e474658a77a9e608ae4c02\": container with ID starting with 1561bde9a019ad75990b3e3e9356542b9f7b39c928e474658a77a9e608ae4c02 not found: ID does not exist" containerID="1561bde9a019ad75990b3e3e9356542b9f7b39c928e474658a77a9e608ae4c02"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.887400 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1561bde9a019ad75990b3e3e9356542b9f7b39c928e474658a77a9e608ae4c02"} err="failed to get container status \"1561bde9a019ad75990b3e3e9356542b9f7b39c928e474658a77a9e608ae4c02\": rpc error: code = NotFound desc = could not find container \"1561bde9a019ad75990b3e3e9356542b9f7b39c928e474658a77a9e608ae4c02\": container with ID starting with 1561bde9a019ad75990b3e3e9356542b9f7b39c928e474658a77a9e608ae4c02 not found: ID does not exist"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.887411 4825 scope.go:117] "RemoveContainer" containerID="04495eea170b3b6ac7d6de27ddc9c4d259cae7e66e02a1d5d08aa8a23a9c3dae"
Feb 19 00:12:35 crc kubenswrapper[4825]: E0219 00:12:35.887637 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04495eea170b3b6ac7d6de27ddc9c4d259cae7e66e02a1d5d08aa8a23a9c3dae\": container with ID starting with 04495eea170b3b6ac7d6de27ddc9c4d259cae7e66e02a1d5d08aa8a23a9c3dae not found: ID does not exist" containerID="04495eea170b3b6ac7d6de27ddc9c4d259cae7e66e02a1d5d08aa8a23a9c3dae"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.887667 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04495eea170b3b6ac7d6de27ddc9c4d259cae7e66e02a1d5d08aa8a23a9c3dae"} err="failed to get container status \"04495eea170b3b6ac7d6de27ddc9c4d259cae7e66e02a1d5d08aa8a23a9c3dae\": rpc error: code = NotFound desc = could not find container \"04495eea170b3b6ac7d6de27ddc9c4d259cae7e66e02a1d5d08aa8a23a9c3dae\": container with ID starting with 04495eea170b3b6ac7d6de27ddc9c4d259cae7e66e02a1d5d08aa8a23a9c3dae not found: ID does not exist"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.887686 4825 scope.go:117] "RemoveContainer" containerID="d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.899773 4825 scope.go:117] "RemoveContainer" containerID="309f82afc47a670752eed0317c295282f4b7719b5e63bc0c01ae1aa3572e4fc8"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.900187 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e372937-4e80-4153-bf75-7811efb6750b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e372937-4e80-4153-bf75-7811efb6750b" (UID: "1e372937-4e80-4153-bf75-7811efb6750b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.918209 4825 scope.go:117] "RemoveContainer" containerID="cfeec4b06ba12621c3c01feed920d86944180ac4b996dc78a20a6928d56fac43"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.932162 4825 scope.go:117] "RemoveContainer" containerID="d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d"
Feb 19 00:12:35 crc kubenswrapper[4825]: E0219 00:12:35.932741 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d\": container with ID starting with d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d not found: ID does not exist" containerID="d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.932783 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d"} err="failed to get container status \"d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d\": rpc error: code = NotFound desc = could not find container \"d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d\": container with ID starting with d54011be1ab49d2dab01574e019390566d6f97e95c1f3255b483e22ab5b67f6d not found: ID does not exist"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.932803 4825 scope.go:117] "RemoveContainer" containerID="309f82afc47a670752eed0317c295282f4b7719b5e63bc0c01ae1aa3572e4fc8"
Feb 19 00:12:35 crc kubenswrapper[4825]: E0219 00:12:35.933129 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"309f82afc47a670752eed0317c295282f4b7719b5e63bc0c01ae1aa3572e4fc8\": container with ID starting with 309f82afc47a670752eed0317c295282f4b7719b5e63bc0c01ae1aa3572e4fc8 not found: ID does not exist" containerID="309f82afc47a670752eed0317c295282f4b7719b5e63bc0c01ae1aa3572e4fc8"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.933150 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309f82afc47a670752eed0317c295282f4b7719b5e63bc0c01ae1aa3572e4fc8"} err="failed to get container status \"309f82afc47a670752eed0317c295282f4b7719b5e63bc0c01ae1aa3572e4fc8\": rpc error: code = NotFound desc = could not find container \"309f82afc47a670752eed0317c295282f4b7719b5e63bc0c01ae1aa3572e4fc8\": container with ID starting with 309f82afc47a670752eed0317c295282f4b7719b5e63bc0c01ae1aa3572e4fc8 not found: ID does not exist"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.933169 4825 scope.go:117] "RemoveContainer" containerID="cfeec4b06ba12621c3c01feed920d86944180ac4b996dc78a20a6928d56fac43"
Feb 19 00:12:35 crc kubenswrapper[4825]: E0219 00:12:35.933832 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfeec4b06ba12621c3c01feed920d86944180ac4b996dc78a20a6928d56fac43\": container with ID starting with cfeec4b06ba12621c3c01feed920d86944180ac4b996dc78a20a6928d56fac43 not found: ID does not exist" containerID="cfeec4b06ba12621c3c01feed920d86944180ac4b996dc78a20a6928d56fac43"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.933854 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfeec4b06ba12621c3c01feed920d86944180ac4b996dc78a20a6928d56fac43"} err="failed to get container status \"cfeec4b06ba12621c3c01feed920d86944180ac4b996dc78a20a6928d56fac43\": rpc error: code = NotFound desc = could not find container \"cfeec4b06ba12621c3c01feed920d86944180ac4b996dc78a20a6928d56fac43\": container with ID starting with cfeec4b06ba12621c3c01feed920d86944180ac4b996dc78a20a6928d56fac43 not found: ID does not exist"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.933867 4825 scope.go:117] "RemoveContainer" containerID="89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.945983 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e372937-4e80-4153-bf75-7811efb6750b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.948253 4825 scope.go:117] "RemoveContainer" containerID="8dbf92b3b5b467be6a49aebd1b232ff539cb6f3e3720b1290326cf4432a64be4"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.963477 4825 scope.go:117] "RemoveContainer" containerID="8e61f6547fd764655955db08e841b5b1810a87f4ca40cf52cbd59da2946a99f9"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.980196 4825 scope.go:117] "RemoveContainer" containerID="89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9"
Feb 19 00:12:35 crc kubenswrapper[4825]: E0219 00:12:35.980684 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9\": container with ID starting with 89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9 not found: ID does not exist" containerID="89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.980731 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9"} err="failed to get container status \"89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9\": rpc error: code = NotFound desc = could not find container \"89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9\": container with ID starting with 89cbf5b63fa0686f4006122e9eca77e62177ea39bd9355c98d5b6c5152e4fee9 not found: ID does not exist"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.980756 4825 scope.go:117] "RemoveContainer" containerID="8dbf92b3b5b467be6a49aebd1b232ff539cb6f3e3720b1290326cf4432a64be4"
Feb 19 00:12:35 crc kubenswrapper[4825]: E0219 00:12:35.981053 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dbf92b3b5b467be6a49aebd1b232ff539cb6f3e3720b1290326cf4432a64be4\": container with ID starting with 8dbf92b3b5b467be6a49aebd1b232ff539cb6f3e3720b1290326cf4432a64be4 not found: ID does not exist" containerID="8dbf92b3b5b467be6a49aebd1b232ff539cb6f3e3720b1290326cf4432a64be4"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.981086 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dbf92b3b5b467be6a49aebd1b232ff539cb6f3e3720b1290326cf4432a64be4"} err="failed to get container status \"8dbf92b3b5b467be6a49aebd1b232ff539cb6f3e3720b1290326cf4432a64be4\": rpc error: code = NotFound desc = could not find container \"8dbf92b3b5b467be6a49aebd1b232ff539cb6f3e3720b1290326cf4432a64be4\": container with ID starting with 8dbf92b3b5b467be6a49aebd1b232ff539cb6f3e3720b1290326cf4432a64be4 not found: ID does not exist"
Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.981108 4825 scope.go:117] "RemoveContainer" containerID="8e61f6547fd764655955db08e841b5b1810a87f4ca40cf52cbd59da2946a99f9"
Feb 19 00:12:35 crc kubenswrapper[4825]: E0219 00:12:35.981312 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e61f6547fd764655955db08e841b5b1810a87f4ca40cf52cbd59da2946a99f9\": container with ID starting with 8e61f6547fd764655955db08e841b5b1810a87f4ca40cf52cbd59da2946a99f9 not found: ID does not exist"
containerID="8e61f6547fd764655955db08e841b5b1810a87f4ca40cf52cbd59da2946a99f9" Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.981336 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e61f6547fd764655955db08e841b5b1810a87f4ca40cf52cbd59da2946a99f9"} err="failed to get container status \"8e61f6547fd764655955db08e841b5b1810a87f4ca40cf52cbd59da2946a99f9\": rpc error: code = NotFound desc = could not find container \"8e61f6547fd764655955db08e841b5b1810a87f4ca40cf52cbd59da2946a99f9\": container with ID starting with 8e61f6547fd764655955db08e841b5b1810a87f4ca40cf52cbd59da2946a99f9 not found: ID does not exist" Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.981348 4825 scope.go:117] "RemoveContainer" containerID="bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093" Feb 19 00:12:35 crc kubenswrapper[4825]: I0219 00:12:35.993880 4825 scope.go:117] "RemoveContainer" containerID="6396cfaf1add662e4b8c55893c8081d9fd341171c5e41103d556652b9705777a" Feb 19 00:12:36 crc kubenswrapper[4825]: I0219 00:12:36.008355 4825 scope.go:117] "RemoveContainer" containerID="d4351e7bd44789a11607204315b513bf90b99893453154b5a3175cbaafe60ee4" Feb 19 00:12:36 crc kubenswrapper[4825]: I0219 00:12:36.023057 4825 scope.go:117] "RemoveContainer" containerID="bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093" Feb 19 00:12:36 crc kubenswrapper[4825]: E0219 00:12:36.023750 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093\": container with ID starting with bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093 not found: ID does not exist" containerID="bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093" Feb 19 00:12:36 crc kubenswrapper[4825]: I0219 00:12:36.023789 4825 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093"} err="failed to get container status \"bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093\": rpc error: code = NotFound desc = could not find container \"bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093\": container with ID starting with bca9cf90cc73bbeff04e933b7b9fbf3f890245e994333e7589e12815c67ad093 not found: ID does not exist" Feb 19 00:12:36 crc kubenswrapper[4825]: I0219 00:12:36.023813 4825 scope.go:117] "RemoveContainer" containerID="6396cfaf1add662e4b8c55893c8081d9fd341171c5e41103d556652b9705777a" Feb 19 00:12:36 crc kubenswrapper[4825]: E0219 00:12:36.024048 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6396cfaf1add662e4b8c55893c8081d9fd341171c5e41103d556652b9705777a\": container with ID starting with 6396cfaf1add662e4b8c55893c8081d9fd341171c5e41103d556652b9705777a not found: ID does not exist" containerID="6396cfaf1add662e4b8c55893c8081d9fd341171c5e41103d556652b9705777a" Feb 19 00:12:36 crc kubenswrapper[4825]: I0219 00:12:36.024072 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6396cfaf1add662e4b8c55893c8081d9fd341171c5e41103d556652b9705777a"} err="failed to get container status \"6396cfaf1add662e4b8c55893c8081d9fd341171c5e41103d556652b9705777a\": rpc error: code = NotFound desc = could not find container \"6396cfaf1add662e4b8c55893c8081d9fd341171c5e41103d556652b9705777a\": container with ID starting with 6396cfaf1add662e4b8c55893c8081d9fd341171c5e41103d556652b9705777a not found: ID does not exist" Feb 19 00:12:36 crc kubenswrapper[4825]: I0219 00:12:36.024085 4825 scope.go:117] "RemoveContainer" containerID="d4351e7bd44789a11607204315b513bf90b99893453154b5a3175cbaafe60ee4" Feb 19 00:12:36 crc kubenswrapper[4825]: E0219 00:12:36.024443 4825 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d4351e7bd44789a11607204315b513bf90b99893453154b5a3175cbaafe60ee4\": container with ID starting with d4351e7bd44789a11607204315b513bf90b99893453154b5a3175cbaafe60ee4 not found: ID does not exist" containerID="d4351e7bd44789a11607204315b513bf90b99893453154b5a3175cbaafe60ee4" Feb 19 00:12:36 crc kubenswrapper[4825]: I0219 00:12:36.024466 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4351e7bd44789a11607204315b513bf90b99893453154b5a3175cbaafe60ee4"} err="failed to get container status \"d4351e7bd44789a11607204315b513bf90b99893453154b5a3175cbaafe60ee4\": rpc error: code = NotFound desc = could not find container \"d4351e7bd44789a11607204315b513bf90b99893453154b5a3175cbaafe60ee4\": container with ID starting with d4351e7bd44789a11607204315b513bf90b99893453154b5a3175cbaafe60ee4 not found: ID does not exist" Feb 19 00:12:36 crc kubenswrapper[4825]: I0219 00:12:36.086556 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zstqt"] Feb 19 00:12:36 crc kubenswrapper[4825]: I0219 00:12:36.088975 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zstqt"] Feb 19 00:12:36 crc kubenswrapper[4825]: I0219 00:12:36.133996 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zgq6r"] Feb 19 00:12:36 crc kubenswrapper[4825]: I0219 00:12:36.136834 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zgq6r"] Feb 19 00:12:36 crc kubenswrapper[4825]: I0219 00:12:36.769325 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" event={"ID":"d005d56b-57dc-4399-899f-3e4945f8d94d","Type":"ContainerStarted","Data":"883e833188e19b63cf5b4c5b81c777b68ed6697bb2eaa516299c8bb5704673fe"} Feb 19 00:12:36 crc 
kubenswrapper[4825]: I0219 00:12:36.788464 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" podStartSLOduration=2.788444606 podStartE2EDuration="2.788444606s" podCreationTimestamp="2026-02-19 00:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:36.784186087 +0000 UTC m=+302.475152134" watchObservedRunningTime="2026-02-19 00:12:36.788444606 +0000 UTC m=+302.479410653" Feb 19 00:12:37 crc kubenswrapper[4825]: I0219 00:12:37.077891 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e372937-4e80-4153-bf75-7811efb6750b" path="/var/lib/kubelet/pods/1e372937-4e80-4153-bf75-7811efb6750b/volumes" Feb 19 00:12:37 crc kubenswrapper[4825]: I0219 00:12:37.080017 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5430045e-a57c-4dd3-8205-737c277afd00" path="/var/lib/kubelet/pods/5430045e-a57c-4dd3-8205-737c277afd00/volumes" Feb 19 00:12:37 crc kubenswrapper[4825]: I0219 00:12:37.081384 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76110eca-3d21-477d-9656-965eaa768c21" path="/var/lib/kubelet/pods/76110eca-3d21-477d-9656-965eaa768c21/volumes" Feb 19 00:12:37 crc kubenswrapper[4825]: I0219 00:12:37.083681 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" path="/var/lib/kubelet/pods/d7139b54-5e59-487a-bbf3-2ac657e5e39d/volumes" Feb 19 00:12:37 crc kubenswrapper[4825]: I0219 00:12:37.085069 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3e4f27-2ef4-4187-911b-135249a4454f" path="/var/lib/kubelet/pods/fe3e4f27-2ef4-4187-911b-135249a4454f/volumes" Feb 19 00:12:37 crc kubenswrapper[4825]: I0219 00:12:37.778153 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" Feb 19 00:12:37 crc kubenswrapper[4825]: I0219 00:12:37.786201 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-f5njj" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.784954 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jp9cc"] Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.786825 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e372937-4e80-4153-bf75-7811efb6750b" containerName="registry-server" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.786855 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e372937-4e80-4153-bf75-7811efb6750b" containerName="registry-server" Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.786875 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5430045e-a57c-4dd3-8205-737c277afd00" containerName="extract-content" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.786884 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5430045e-a57c-4dd3-8205-737c277afd00" containerName="extract-content" Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.786897 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3e4f27-2ef4-4187-911b-135249a4454f" containerName="extract-content" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.786907 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3e4f27-2ef4-4187-911b-135249a4454f" containerName="extract-content" Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.786919 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e372937-4e80-4153-bf75-7811efb6750b" containerName="extract-utilities" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.786928 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e372937-4e80-4153-bf75-7811efb6750b" 
containerName="extract-utilities" Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.786945 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e372937-4e80-4153-bf75-7811efb6750b" containerName="extract-content" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.786954 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e372937-4e80-4153-bf75-7811efb6750b" containerName="extract-content" Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.786967 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3e4f27-2ef4-4187-911b-135249a4454f" containerName="registry-server" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.786975 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3e4f27-2ef4-4187-911b-135249a4454f" containerName="registry-server" Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.786990 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" containerName="extract-utilities" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.786998 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" containerName="extract-utilities" Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.787008 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3e4f27-2ef4-4187-911b-135249a4454f" containerName="extract-utilities" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.787016 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3e4f27-2ef4-4187-911b-135249a4454f" containerName="extract-utilities" Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.787029 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5430045e-a57c-4dd3-8205-737c277afd00" containerName="extract-utilities" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.787037 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5430045e-a57c-4dd3-8205-737c277afd00" 
containerName="extract-utilities" Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.787048 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5430045e-a57c-4dd3-8205-737c277afd00" containerName="registry-server" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.787056 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="5430045e-a57c-4dd3-8205-737c277afd00" containerName="registry-server" Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.787069 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" containerName="registry-server" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.787078 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" containerName="registry-server" Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.787090 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76110eca-3d21-477d-9656-965eaa768c21" containerName="marketplace-operator" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.787098 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="76110eca-3d21-477d-9656-965eaa768c21" containerName="marketplace-operator" Feb 19 00:13:10 crc kubenswrapper[4825]: E0219 00:13:10.787107 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" containerName="extract-content" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.787116 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" containerName="extract-content" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.787216 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="76110eca-3d21-477d-9656-965eaa768c21" containerName="marketplace-operator" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.787232 4825 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fe3e4f27-2ef4-4187-911b-135249a4454f" containerName="registry-server" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.787243 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7139b54-5e59-487a-bbf3-2ac657e5e39d" containerName="registry-server" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.787260 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="5430045e-a57c-4dd3-8205-737c277afd00" containerName="registry-server" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.787273 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e372937-4e80-4153-bf75-7811efb6750b" containerName="registry-server" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.788145 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.791925 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.799046 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jp9cc"] Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.901667 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kjk8\" (UniqueName: \"kubernetes.io/projected/9ec19de0-0a04-435c-b93d-ed4231a4cce4-kube-api-access-9kjk8\") pod \"certified-operators-jp9cc\" (UID: \"9ec19de0-0a04-435c-b93d-ed4231a4cce4\") " pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.902006 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec19de0-0a04-435c-b93d-ed4231a4cce4-catalog-content\") pod \"certified-operators-jp9cc\" (UID: 
\"9ec19de0-0a04-435c-b93d-ed4231a4cce4\") " pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.902443 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec19de0-0a04-435c-b93d-ed4231a4cce4-utilities\") pod \"certified-operators-jp9cc\" (UID: \"9ec19de0-0a04-435c-b93d-ed4231a4cce4\") " pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.966598 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d5t5x"] Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.970134 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.975009 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 00:13:10 crc kubenswrapper[4825]: I0219 00:13:10.987752 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5t5x"] Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.004093 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8961fc7-6b71-4345-9d73-db95ac0b0627-catalog-content\") pod \"community-operators-d5t5x\" (UID: \"a8961fc7-6b71-4345-9d73-db95ac0b0627\") " pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.004343 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8961fc7-6b71-4345-9d73-db95ac0b0627-utilities\") pod \"community-operators-d5t5x\" (UID: \"a8961fc7-6b71-4345-9d73-db95ac0b0627\") " 
pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.004412 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2ccj\" (UniqueName: \"kubernetes.io/projected/a8961fc7-6b71-4345-9d73-db95ac0b0627-kube-api-access-w2ccj\") pod \"community-operators-d5t5x\" (UID: \"a8961fc7-6b71-4345-9d73-db95ac0b0627\") " pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.004483 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec19de0-0a04-435c-b93d-ed4231a4cce4-utilities\") pod \"certified-operators-jp9cc\" (UID: \"9ec19de0-0a04-435c-b93d-ed4231a4cce4\") " pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.004631 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kjk8\" (UniqueName: \"kubernetes.io/projected/9ec19de0-0a04-435c-b93d-ed4231a4cce4-kube-api-access-9kjk8\") pod \"certified-operators-jp9cc\" (UID: \"9ec19de0-0a04-435c-b93d-ed4231a4cce4\") " pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.004918 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec19de0-0a04-435c-b93d-ed4231a4cce4-utilities\") pod \"certified-operators-jp9cc\" (UID: \"9ec19de0-0a04-435c-b93d-ed4231a4cce4\") " pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.005331 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec19de0-0a04-435c-b93d-ed4231a4cce4-catalog-content\") pod \"certified-operators-jp9cc\" (UID: \"9ec19de0-0a04-435c-b93d-ed4231a4cce4\") " 
pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.005842 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec19de0-0a04-435c-b93d-ed4231a4cce4-catalog-content\") pod \"certified-operators-jp9cc\" (UID: \"9ec19de0-0a04-435c-b93d-ed4231a4cce4\") " pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.027720 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kjk8\" (UniqueName: \"kubernetes.io/projected/9ec19de0-0a04-435c-b93d-ed4231a4cce4-kube-api-access-9kjk8\") pod \"certified-operators-jp9cc\" (UID: \"9ec19de0-0a04-435c-b93d-ed4231a4cce4\") " pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.107494 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8961fc7-6b71-4345-9d73-db95ac0b0627-catalog-content\") pod \"community-operators-d5t5x\" (UID: \"a8961fc7-6b71-4345-9d73-db95ac0b0627\") " pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.107698 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8961fc7-6b71-4345-9d73-db95ac0b0627-utilities\") pod \"community-operators-d5t5x\" (UID: \"a8961fc7-6b71-4345-9d73-db95ac0b0627\") " pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.107736 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ccj\" (UniqueName: \"kubernetes.io/projected/a8961fc7-6b71-4345-9d73-db95ac0b0627-kube-api-access-w2ccj\") pod \"community-operators-d5t5x\" (UID: \"a8961fc7-6b71-4345-9d73-db95ac0b0627\") " 
pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.108376 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8961fc7-6b71-4345-9d73-db95ac0b0627-utilities\") pod \"community-operators-d5t5x\" (UID: \"a8961fc7-6b71-4345-9d73-db95ac0b0627\") " pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.108449 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8961fc7-6b71-4345-9d73-db95ac0b0627-catalog-content\") pod \"community-operators-d5t5x\" (UID: \"a8961fc7-6b71-4345-9d73-db95ac0b0627\") " pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.130134 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2ccj\" (UniqueName: \"kubernetes.io/projected/a8961fc7-6b71-4345-9d73-db95ac0b0627-kube-api-access-w2ccj\") pod \"community-operators-d5t5x\" (UID: \"a8961fc7-6b71-4345-9d73-db95ac0b0627\") " pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.143913 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.296322 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.498205 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5t5x"] Feb 19 00:13:11 crc kubenswrapper[4825]: I0219 00:13:11.557469 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jp9cc"] Feb 19 00:13:11 crc kubenswrapper[4825]: W0219 00:13:11.565811 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ec19de0_0a04_435c_b93d_ed4231a4cce4.slice/crio-ca98b65017d1e9cf4d96793b8855ee8a7cd2c59cde3be613ee449f980cf78271 WatchSource:0}: Error finding container ca98b65017d1e9cf4d96793b8855ee8a7cd2c59cde3be613ee449f980cf78271: Status 404 returned error can't find the container with id ca98b65017d1e9cf4d96793b8855ee8a7cd2c59cde3be613ee449f980cf78271 Feb 19 00:13:12 crc kubenswrapper[4825]: I0219 00:13:12.087328 4825 generic.go:334] "Generic (PLEG): container finished" podID="9ec19de0-0a04-435c-b93d-ed4231a4cce4" containerID="af43d5d60bd4095584cb6fa4682ff5a2d22a578e01dd9a990f56200e47d938e8" exitCode=0 Feb 19 00:13:12 crc kubenswrapper[4825]: I0219 00:13:12.087409 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jp9cc" event={"ID":"9ec19de0-0a04-435c-b93d-ed4231a4cce4","Type":"ContainerDied","Data":"af43d5d60bd4095584cb6fa4682ff5a2d22a578e01dd9a990f56200e47d938e8"} Feb 19 00:13:12 crc kubenswrapper[4825]: I0219 00:13:12.088060 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jp9cc" event={"ID":"9ec19de0-0a04-435c-b93d-ed4231a4cce4","Type":"ContainerStarted","Data":"ca98b65017d1e9cf4d96793b8855ee8a7cd2c59cde3be613ee449f980cf78271"} Feb 19 00:13:12 crc kubenswrapper[4825]: I0219 00:13:12.091440 4825 generic.go:334] "Generic (PLEG): container finished" 
podID="a8961fc7-6b71-4345-9d73-db95ac0b0627" containerID="8c7f4c2177baac192ac9f333122e01cc80ba83daf0993795285139dfa854b1bf" exitCode=0 Feb 19 00:13:12 crc kubenswrapper[4825]: I0219 00:13:12.091569 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5t5x" event={"ID":"a8961fc7-6b71-4345-9d73-db95ac0b0627","Type":"ContainerDied","Data":"8c7f4c2177baac192ac9f333122e01cc80ba83daf0993795285139dfa854b1bf"} Feb 19 00:13:12 crc kubenswrapper[4825]: I0219 00:13:12.091656 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5t5x" event={"ID":"a8961fc7-6b71-4345-9d73-db95ac0b0627","Type":"ContainerStarted","Data":"48391633bd66e310674ae8c4b00e90242f4a27022e730b993a566f1eca1c8670"} Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.098032 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jp9cc" event={"ID":"9ec19de0-0a04-435c-b93d-ed4231a4cce4","Type":"ContainerStarted","Data":"2212b8b141bcd716a3ebb1beb40fcd2c57fd355897144a57165b66704e067b02"} Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.100834 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5t5x" event={"ID":"a8961fc7-6b71-4345-9d73-db95ac0b0627","Type":"ContainerStarted","Data":"8c34df86948ac286fcb5dc88ca4a9b92aab9257349a17187d68e54fba8125999"} Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.157145 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bjxd7"] Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.158602 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.160644 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.166785 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjxd7"] Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.240702 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwbjw\" (UniqueName: \"kubernetes.io/projected/af9e7a92-e527-4a28-a75c-fac2d38484d7-kube-api-access-pwbjw\") pod \"redhat-marketplace-bjxd7\" (UID: \"af9e7a92-e527-4a28-a75c-fac2d38484d7\") " pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.240923 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9e7a92-e527-4a28-a75c-fac2d38484d7-catalog-content\") pod \"redhat-marketplace-bjxd7\" (UID: \"af9e7a92-e527-4a28-a75c-fac2d38484d7\") " pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.241012 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9e7a92-e527-4a28-a75c-fac2d38484d7-utilities\") pod \"redhat-marketplace-bjxd7\" (UID: \"af9e7a92-e527-4a28-a75c-fac2d38484d7\") " pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.342144 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9e7a92-e527-4a28-a75c-fac2d38484d7-catalog-content\") pod \"redhat-marketplace-bjxd7\" (UID: 
\"af9e7a92-e527-4a28-a75c-fac2d38484d7\") " pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.342187 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9e7a92-e527-4a28-a75c-fac2d38484d7-utilities\") pod \"redhat-marketplace-bjxd7\" (UID: \"af9e7a92-e527-4a28-a75c-fac2d38484d7\") " pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.342224 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwbjw\" (UniqueName: \"kubernetes.io/projected/af9e7a92-e527-4a28-a75c-fac2d38484d7-kube-api-access-pwbjw\") pod \"redhat-marketplace-bjxd7\" (UID: \"af9e7a92-e527-4a28-a75c-fac2d38484d7\") " pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.342884 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9e7a92-e527-4a28-a75c-fac2d38484d7-utilities\") pod \"redhat-marketplace-bjxd7\" (UID: \"af9e7a92-e527-4a28-a75c-fac2d38484d7\") " pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.343225 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9e7a92-e527-4a28-a75c-fac2d38484d7-catalog-content\") pod \"redhat-marketplace-bjxd7\" (UID: \"af9e7a92-e527-4a28-a75c-fac2d38484d7\") " pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.362368 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-djjk4"] Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.363358 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.363801 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwbjw\" (UniqueName: \"kubernetes.io/projected/af9e7a92-e527-4a28-a75c-fac2d38484d7-kube-api-access-pwbjw\") pod \"redhat-marketplace-bjxd7\" (UID: \"af9e7a92-e527-4a28-a75c-fac2d38484d7\") " pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.365733 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.372938 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-djjk4"] Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.443930 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvgbg\" (UniqueName: \"kubernetes.io/projected/bfbec1eb-c15f-48e8-bb4e-9e876d55a511-kube-api-access-tvgbg\") pod \"redhat-operators-djjk4\" (UID: \"bfbec1eb-c15f-48e8-bb4e-9e876d55a511\") " pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.444075 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfbec1eb-c15f-48e8-bb4e-9e876d55a511-catalog-content\") pod \"redhat-operators-djjk4\" (UID: \"bfbec1eb-c15f-48e8-bb4e-9e876d55a511\") " pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.444106 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfbec1eb-c15f-48e8-bb4e-9e876d55a511-utilities\") pod \"redhat-operators-djjk4\" (UID: 
\"bfbec1eb-c15f-48e8-bb4e-9e876d55a511\") " pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.545837 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfbec1eb-c15f-48e8-bb4e-9e876d55a511-catalog-content\") pod \"redhat-operators-djjk4\" (UID: \"bfbec1eb-c15f-48e8-bb4e-9e876d55a511\") " pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.545887 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfbec1eb-c15f-48e8-bb4e-9e876d55a511-utilities\") pod \"redhat-operators-djjk4\" (UID: \"bfbec1eb-c15f-48e8-bb4e-9e876d55a511\") " pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.545916 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvgbg\" (UniqueName: \"kubernetes.io/projected/bfbec1eb-c15f-48e8-bb4e-9e876d55a511-kube-api-access-tvgbg\") pod \"redhat-operators-djjk4\" (UID: \"bfbec1eb-c15f-48e8-bb4e-9e876d55a511\") " pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.546493 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfbec1eb-c15f-48e8-bb4e-9e876d55a511-catalog-content\") pod \"redhat-operators-djjk4\" (UID: \"bfbec1eb-c15f-48e8-bb4e-9e876d55a511\") " pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.546891 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfbec1eb-c15f-48e8-bb4e-9e876d55a511-utilities\") pod \"redhat-operators-djjk4\" (UID: \"bfbec1eb-c15f-48e8-bb4e-9e876d55a511\") " 
pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.566041 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvgbg\" (UniqueName: \"kubernetes.io/projected/bfbec1eb-c15f-48e8-bb4e-9e876d55a511-kube-api-access-tvgbg\") pod \"redhat-operators-djjk4\" (UID: \"bfbec1eb-c15f-48e8-bb4e-9e876d55a511\") " pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.572331 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:13 crc kubenswrapper[4825]: I0219 00:13:13.701083 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:14 crc kubenswrapper[4825]: I0219 00:13:14.013834 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjxd7"] Feb 19 00:13:14 crc kubenswrapper[4825]: W0219 00:13:14.014880 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf9e7a92_e527_4a28_a75c_fac2d38484d7.slice/crio-c22f488c58ee5e816d69693d8f1145e126eaec175d7c1eb5218704ba3df2509f WatchSource:0}: Error finding container c22f488c58ee5e816d69693d8f1145e126eaec175d7c1eb5218704ba3df2509f: Status 404 returned error can't find the container with id c22f488c58ee5e816d69693d8f1145e126eaec175d7c1eb5218704ba3df2509f Feb 19 00:13:14 crc kubenswrapper[4825]: I0219 00:13:14.109860 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-djjk4"] Feb 19 00:13:14 crc kubenswrapper[4825]: I0219 00:13:14.111161 4825 generic.go:334] "Generic (PLEG): container finished" podID="a8961fc7-6b71-4345-9d73-db95ac0b0627" containerID="8c34df86948ac286fcb5dc88ca4a9b92aab9257349a17187d68e54fba8125999" exitCode=0 Feb 19 00:13:14 crc 
kubenswrapper[4825]: I0219 00:13:14.111237 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5t5x" event={"ID":"a8961fc7-6b71-4345-9d73-db95ac0b0627","Type":"ContainerDied","Data":"8c34df86948ac286fcb5dc88ca4a9b92aab9257349a17187d68e54fba8125999"} Feb 19 00:13:14 crc kubenswrapper[4825]: I0219 00:13:14.112027 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjxd7" event={"ID":"af9e7a92-e527-4a28-a75c-fac2d38484d7","Type":"ContainerStarted","Data":"c22f488c58ee5e816d69693d8f1145e126eaec175d7c1eb5218704ba3df2509f"} Feb 19 00:13:14 crc kubenswrapper[4825]: I0219 00:13:14.113914 4825 generic.go:334] "Generic (PLEG): container finished" podID="9ec19de0-0a04-435c-b93d-ed4231a4cce4" containerID="2212b8b141bcd716a3ebb1beb40fcd2c57fd355897144a57165b66704e067b02" exitCode=0 Feb 19 00:13:14 crc kubenswrapper[4825]: I0219 00:13:14.113950 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jp9cc" event={"ID":"9ec19de0-0a04-435c-b93d-ed4231a4cce4","Type":"ContainerDied","Data":"2212b8b141bcd716a3ebb1beb40fcd2c57fd355897144a57165b66704e067b02"} Feb 19 00:13:15 crc kubenswrapper[4825]: I0219 00:13:15.121295 4825 generic.go:334] "Generic (PLEG): container finished" podID="af9e7a92-e527-4a28-a75c-fac2d38484d7" containerID="db3640d2a86673752bf48450cb5630eee9ab2214263b66dbd578eef9e4f6e11d" exitCode=0 Feb 19 00:13:15 crc kubenswrapper[4825]: I0219 00:13:15.121409 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjxd7" event={"ID":"af9e7a92-e527-4a28-a75c-fac2d38484d7","Type":"ContainerDied","Data":"db3640d2a86673752bf48450cb5630eee9ab2214263b66dbd578eef9e4f6e11d"} Feb 19 00:13:15 crc kubenswrapper[4825]: I0219 00:13:15.125885 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jp9cc" 
event={"ID":"9ec19de0-0a04-435c-b93d-ed4231a4cce4","Type":"ContainerStarted","Data":"ff6c31d6830575f0f7d50092c678bf481bbafde15c5d048a65592b46d5eb5ef1"} Feb 19 00:13:15 crc kubenswrapper[4825]: I0219 00:13:15.128782 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djjk4" event={"ID":"bfbec1eb-c15f-48e8-bb4e-9e876d55a511","Type":"ContainerDied","Data":"23b677edcff875827c69d7819977262c773e42afc65a35fb9b96838f7b2abc9b"} Feb 19 00:13:15 crc kubenswrapper[4825]: I0219 00:13:15.128753 4825 generic.go:334] "Generic (PLEG): container finished" podID="bfbec1eb-c15f-48e8-bb4e-9e876d55a511" containerID="23b677edcff875827c69d7819977262c773e42afc65a35fb9b96838f7b2abc9b" exitCode=0 Feb 19 00:13:15 crc kubenswrapper[4825]: I0219 00:13:15.128875 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djjk4" event={"ID":"bfbec1eb-c15f-48e8-bb4e-9e876d55a511","Type":"ContainerStarted","Data":"6db1cb36a88ba7d1318f90f557bec1b021c8ce580959508db7c677dc07066dfe"} Feb 19 00:13:15 crc kubenswrapper[4825]: I0219 00:13:15.135341 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5t5x" event={"ID":"a8961fc7-6b71-4345-9d73-db95ac0b0627","Type":"ContainerStarted","Data":"d7751d5ec3c5cfbb21facee4e499250efe97bf1dbae237580b51db6d75d42070"} Feb 19 00:13:15 crc kubenswrapper[4825]: I0219 00:13:15.190896 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d5t5x" podStartSLOduration=2.757165621 podStartE2EDuration="5.190879707s" podCreationTimestamp="2026-02-19 00:13:10 +0000 UTC" firstStartedPulling="2026-02-19 00:13:12.093658608 +0000 UTC m=+337.784624695" lastFinishedPulling="2026-02-19 00:13:14.527372724 +0000 UTC m=+340.218338781" observedRunningTime="2026-02-19 00:13:15.187164726 +0000 UTC m=+340.878130763" watchObservedRunningTime="2026-02-19 00:13:15.190879707 +0000 UTC 
m=+340.881845754" Feb 19 00:13:15 crc kubenswrapper[4825]: I0219 00:13:15.209204 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jp9cc" podStartSLOduration=2.78806865 podStartE2EDuration="5.209183699s" podCreationTimestamp="2026-02-19 00:13:10 +0000 UTC" firstStartedPulling="2026-02-19 00:13:12.090604398 +0000 UTC m=+337.781570445" lastFinishedPulling="2026-02-19 00:13:14.511719447 +0000 UTC m=+340.202685494" observedRunningTime="2026-02-19 00:13:15.205061171 +0000 UTC m=+340.896027218" watchObservedRunningTime="2026-02-19 00:13:15.209183699 +0000 UTC m=+340.900149746" Feb 19 00:13:16 crc kubenswrapper[4825]: I0219 00:13:16.143389 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjxd7" event={"ID":"af9e7a92-e527-4a28-a75c-fac2d38484d7","Type":"ContainerStarted","Data":"b8d050772ae770e0e35ada5bb1ab2f716c753bfef3a1ed3b55016c52dd3dc613"} Feb 19 00:13:16 crc kubenswrapper[4825]: I0219 00:13:16.145814 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djjk4" event={"ID":"bfbec1eb-c15f-48e8-bb4e-9e876d55a511","Type":"ContainerStarted","Data":"301fdf6b359d5871234369e9faca0f9f78db0b8fdeaae55c49daf4af57032934"} Feb 19 00:13:17 crc kubenswrapper[4825]: I0219 00:13:17.156091 4825 generic.go:334] "Generic (PLEG): container finished" podID="af9e7a92-e527-4a28-a75c-fac2d38484d7" containerID="b8d050772ae770e0e35ada5bb1ab2f716c753bfef3a1ed3b55016c52dd3dc613" exitCode=0 Feb 19 00:13:17 crc kubenswrapper[4825]: I0219 00:13:17.156171 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjxd7" event={"ID":"af9e7a92-e527-4a28-a75c-fac2d38484d7","Type":"ContainerDied","Data":"b8d050772ae770e0e35ada5bb1ab2f716c753bfef3a1ed3b55016c52dd3dc613"} Feb 19 00:13:17 crc kubenswrapper[4825]: I0219 00:13:17.158708 4825 generic.go:334] "Generic (PLEG): container finished" 
podID="bfbec1eb-c15f-48e8-bb4e-9e876d55a511" containerID="301fdf6b359d5871234369e9faca0f9f78db0b8fdeaae55c49daf4af57032934" exitCode=0 Feb 19 00:13:17 crc kubenswrapper[4825]: I0219 00:13:17.158743 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djjk4" event={"ID":"bfbec1eb-c15f-48e8-bb4e-9e876d55a511","Type":"ContainerDied","Data":"301fdf6b359d5871234369e9faca0f9f78db0b8fdeaae55c49daf4af57032934"} Feb 19 00:13:18 crc kubenswrapper[4825]: I0219 00:13:18.168187 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djjk4" event={"ID":"bfbec1eb-c15f-48e8-bb4e-9e876d55a511","Type":"ContainerStarted","Data":"b290dbae067c628f01df8a249bbc1b3ad3899aacb056fc016360f6078f6a2f4a"} Feb 19 00:13:18 crc kubenswrapper[4825]: I0219 00:13:18.171993 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjxd7" event={"ID":"af9e7a92-e527-4a28-a75c-fac2d38484d7","Type":"ContainerStarted","Data":"ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a"} Feb 19 00:13:18 crc kubenswrapper[4825]: I0219 00:13:18.194933 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-djjk4" podStartSLOduration=2.777962731 podStartE2EDuration="5.194913372s" podCreationTimestamp="2026-02-19 00:13:13 +0000 UTC" firstStartedPulling="2026-02-19 00:13:15.129876591 +0000 UTC m=+340.820842638" lastFinishedPulling="2026-02-19 00:13:17.546827232 +0000 UTC m=+343.237793279" observedRunningTime="2026-02-19 00:13:18.189119746 +0000 UTC m=+343.880085813" watchObservedRunningTime="2026-02-19 00:13:18.194913372 +0000 UTC m=+343.885879429" Feb 19 00:13:21 crc kubenswrapper[4825]: I0219 00:13:21.144436 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:21 crc kubenswrapper[4825]: I0219 00:13:21.145038 4825 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:21 crc kubenswrapper[4825]: I0219 00:13:21.199621 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:21 crc kubenswrapper[4825]: I0219 00:13:21.220374 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bjxd7" podStartSLOduration=5.789149417 podStartE2EDuration="8.220353563s" podCreationTimestamp="2026-02-19 00:13:13 +0000 UTC" firstStartedPulling="2026-02-19 00:13:15.123868102 +0000 UTC m=+340.814834159" lastFinishedPulling="2026-02-19 00:13:17.555072258 +0000 UTC m=+343.246038305" observedRunningTime="2026-02-19 00:13:18.211361612 +0000 UTC m=+343.902327669" watchObservedRunningTime="2026-02-19 00:13:21.220353563 +0000 UTC m=+346.911319610" Feb 19 00:13:21 crc kubenswrapper[4825]: I0219 00:13:21.250613 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jp9cc" Feb 19 00:13:21 crc kubenswrapper[4825]: I0219 00:13:21.297129 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:21 crc kubenswrapper[4825]: I0219 00:13:21.297181 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:21 crc kubenswrapper[4825]: I0219 00:13:21.338356 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:22 crc kubenswrapper[4825]: I0219 00:13:22.258301 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d5t5x" Feb 19 00:13:23 crc kubenswrapper[4825]: I0219 00:13:23.572992 4825 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:23 crc kubenswrapper[4825]: I0219 00:13:23.574044 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:23 crc kubenswrapper[4825]: I0219 00:13:23.609830 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:23 crc kubenswrapper[4825]: I0219 00:13:23.701427 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:23 crc kubenswrapper[4825]: I0219 00:13:23.701473 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:23 crc kubenswrapper[4825]: I0219 00:13:23.752992 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:24 crc kubenswrapper[4825]: I0219 00:13:24.244782 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:13:24 crc kubenswrapper[4825]: I0219 00:13:24.274372 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-djjk4" Feb 19 00:13:28 crc kubenswrapper[4825]: I0219 00:13:28.823481 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:13:28 crc kubenswrapper[4825]: I0219 00:13:28.826445 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:13:58 crc kubenswrapper[4825]: I0219 00:13:58.823818 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:13:58 crc kubenswrapper[4825]: I0219 00:13:58.824382 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.194544 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4mfhm"] Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.195413 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.210251 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4mfhm"] Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.302330 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rxkd\" (UniqueName: \"kubernetes.io/projected/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-kube-api-access-2rxkd\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.302386 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.302454 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-registry-certificates\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.302481 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-bound-sa-token\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.302521 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.302598 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-registry-tls\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.302633 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.302651 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-trusted-ca\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.326699 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.403330 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-registry-certificates\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.403720 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-bound-sa-token\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.403803 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.403882 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-registry-tls\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.403956 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.404020 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-trusted-ca\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.404096 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rxkd\" (UniqueName: \"kubernetes.io/projected/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-kube-api-access-2rxkd\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.404365 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-registry-certificates\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.404944 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.405412 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-trusted-ca\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.410133 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-registry-tls\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.420933 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.422237 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-bound-sa-token\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: \"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.422717 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rxkd\" (UniqueName: \"kubernetes.io/projected/82a20e10-1f00-4ce4-a842-ca08e63bd3ee-kube-api-access-2rxkd\") pod \"image-registry-66df7c8f76-4mfhm\" (UID: 
\"82a20e10-1f00-4ce4-a842-ca08e63bd3ee\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.555732 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:13:59 crc kubenswrapper[4825]: I0219 00:13:59.777739 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4mfhm"] Feb 19 00:14:00 crc kubenswrapper[4825]: I0219 00:14:00.405278 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" event={"ID":"82a20e10-1f00-4ce4-a842-ca08e63bd3ee","Type":"ContainerStarted","Data":"22a8b2571946739d1857fb82454d74b88a1e38431975ef2e6203054f690e3d9e"} Feb 19 00:14:00 crc kubenswrapper[4825]: I0219 00:14:00.405347 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" event={"ID":"82a20e10-1f00-4ce4-a842-ca08e63bd3ee","Type":"ContainerStarted","Data":"cdcb832e6c3f99377e7d385248dd6ee8f37ad51b31dcdf6294cc8e4396dc3f67"} Feb 19 00:14:00 crc kubenswrapper[4825]: I0219 00:14:00.406187 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:14:00 crc kubenswrapper[4825]: I0219 00:14:00.431461 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" podStartSLOduration=1.431439791 podStartE2EDuration="1.431439791s" podCreationTimestamp="2026-02-19 00:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:14:00.426356894 +0000 UTC m=+386.117322951" watchObservedRunningTime="2026-02-19 00:14:00.431439791 +0000 UTC m=+386.122405838" Feb 19 00:14:19 crc kubenswrapper[4825]: I0219 
00:14:19.565803 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4mfhm" Feb 19 00:14:19 crc kubenswrapper[4825]: I0219 00:14:19.656030 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rlctl"] Feb 19 00:14:28 crc kubenswrapper[4825]: I0219 00:14:28.823430 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:14:28 crc kubenswrapper[4825]: I0219 00:14:28.824789 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:14:28 crc kubenswrapper[4825]: I0219 00:14:28.824878 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:14:28 crc kubenswrapper[4825]: I0219 00:14:28.826162 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53baafc0a5b9c1c224604d4119e0e92aee7172ade69be62b2ef5640b6546ae02"} pod="openshift-machine-config-operator/machine-config-daemon-tggq9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:14:28 crc kubenswrapper[4825]: I0219 00:14:28.826279 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" 
containerName="machine-config-daemon" containerID="cri-o://53baafc0a5b9c1c224604d4119e0e92aee7172ade69be62b2ef5640b6546ae02" gracePeriod=600 Feb 19 00:14:29 crc kubenswrapper[4825]: I0219 00:14:29.593283 4825 generic.go:334] "Generic (PLEG): container finished" podID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerID="53baafc0a5b9c1c224604d4119e0e92aee7172ade69be62b2ef5640b6546ae02" exitCode=0 Feb 19 00:14:29 crc kubenswrapper[4825]: I0219 00:14:29.593385 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerDied","Data":"53baafc0a5b9c1c224604d4119e0e92aee7172ade69be62b2ef5640b6546ae02"} Feb 19 00:14:29 crc kubenswrapper[4825]: I0219 00:14:29.593632 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerStarted","Data":"3977a1de60d33698055567352ee370d0b71d26733409f8b00d78c8c89781f897"} Feb 19 00:14:29 crc kubenswrapper[4825]: I0219 00:14:29.593655 4825 scope.go:117] "RemoveContainer" containerID="e296002b2e0a47b132ee41abaebf9e72c3d1b6e2278117b99d5b8b095a271a5e" Feb 19 00:14:35 crc kubenswrapper[4825]: I0219 00:14:35.250256 4825 scope.go:117] "RemoveContainer" containerID="16debf5c838fb2beb7cd728394a99bec788d3aeff39b480a7fc0b74f2e1b8af9" Feb 19 00:14:35 crc kubenswrapper[4825]: I0219 00:14:35.279286 4825 scope.go:117] "RemoveContainer" containerID="8df3d3d89fc5c02e96cfaa4b0916763fccd44e8ef66d934e13def8d27407ca25" Feb 19 00:14:35 crc kubenswrapper[4825]: I0219 00:14:35.304873 4825 scope.go:117] "RemoveContainer" containerID="93f0800705f1e667c88a374ec70016030853d3438f0ed403c98920d47f260091" Feb 19 00:14:35 crc kubenswrapper[4825]: I0219 00:14:35.333870 4825 scope.go:117] "RemoveContainer" containerID="6b47d2f23b4ddca16a6ea69364aa5a4de6b312cfe542aef049e223a465168711" Feb 19 00:14:35 crc 
kubenswrapper[4825]: I0219 00:14:35.348486 4825 scope.go:117] "RemoveContainer" containerID="8366e09acba4ac57ea18f9a918efd3999b01649630d79bd57d83dc21cb342346" Feb 19 00:14:35 crc kubenswrapper[4825]: I0219 00:14:35.364718 4825 scope.go:117] "RemoveContainer" containerID="17d2eb7c644d846190d98056cf707913227b22db4a221f11f5227705a30bcdd5" Feb 19 00:14:44 crc kubenswrapper[4825]: I0219 00:14:44.708282 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" podUID="43b75e6e-5b2d-4690-b500-20ad18a1e042" containerName="registry" containerID="cri-o://e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490" gracePeriod=30 Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.163079 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.230754 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-bound-sa-token\") pod \"43b75e6e-5b2d-4690-b500-20ad18a1e042\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.230816 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfsbk\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-kube-api-access-mfsbk\") pod \"43b75e6e-5b2d-4690-b500-20ad18a1e042\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.230899 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43b75e6e-5b2d-4690-b500-20ad18a1e042-ca-trust-extracted\") pod \"43b75e6e-5b2d-4690-b500-20ad18a1e042\" (UID: 
\"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.231152 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"43b75e6e-5b2d-4690-b500-20ad18a1e042\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.231216 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-registry-tls\") pod \"43b75e6e-5b2d-4690-b500-20ad18a1e042\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.231287 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43b75e6e-5b2d-4690-b500-20ad18a1e042-installation-pull-secrets\") pod \"43b75e6e-5b2d-4690-b500-20ad18a1e042\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.232120 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43b75e6e-5b2d-4690-b500-20ad18a1e042-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "43b75e6e-5b2d-4690-b500-20ad18a1e042" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.232312 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43b75e6e-5b2d-4690-b500-20ad18a1e042-trusted-ca\") pod \"43b75e6e-5b2d-4690-b500-20ad18a1e042\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.232397 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43b75e6e-5b2d-4690-b500-20ad18a1e042-registry-certificates\") pod \"43b75e6e-5b2d-4690-b500-20ad18a1e042\" (UID: \"43b75e6e-5b2d-4690-b500-20ad18a1e042\") " Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.233046 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43b75e6e-5b2d-4690-b500-20ad18a1e042-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "43b75e6e-5b2d-4690-b500-20ad18a1e042" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.233391 4825 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/43b75e6e-5b2d-4690-b500-20ad18a1e042-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.237375 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "43b75e6e-5b2d-4690-b500-20ad18a1e042" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.237581 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43b75e6e-5b2d-4690-b500-20ad18a1e042-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "43b75e6e-5b2d-4690-b500-20ad18a1e042" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.238631 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-kube-api-access-mfsbk" (OuterVolumeSpecName: "kube-api-access-mfsbk") pod "43b75e6e-5b2d-4690-b500-20ad18a1e042" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042"). InnerVolumeSpecName "kube-api-access-mfsbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.240455 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "43b75e6e-5b2d-4690-b500-20ad18a1e042" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.244292 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "43b75e6e-5b2d-4690-b500-20ad18a1e042" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.251231 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43b75e6e-5b2d-4690-b500-20ad18a1e042-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "43b75e6e-5b2d-4690-b500-20ad18a1e042" (UID: "43b75e6e-5b2d-4690-b500-20ad18a1e042"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.334965 4825 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.334996 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfsbk\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-kube-api-access-mfsbk\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.335008 4825 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/43b75e6e-5b2d-4690-b500-20ad18a1e042-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.335017 4825 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/43b75e6e-5b2d-4690-b500-20ad18a1e042-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.335026 4825 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/43b75e6e-5b2d-4690-b500-20ad18a1e042-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.335034 4825 reconciler_common.go:293] "Volume detached for 
volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/43b75e6e-5b2d-4690-b500-20ad18a1e042-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.714537 4825 generic.go:334] "Generic (PLEG): container finished" podID="43b75e6e-5b2d-4690-b500-20ad18a1e042" containerID="e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490" exitCode=0 Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.714580 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" event={"ID":"43b75e6e-5b2d-4690-b500-20ad18a1e042","Type":"ContainerDied","Data":"e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490"} Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.714616 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" event={"ID":"43b75e6e-5b2d-4690-b500-20ad18a1e042","Type":"ContainerDied","Data":"a519ee1f12c9bedad67b71edd3e97b08c7a2035053dc8d909b23b8b83578d8d6"} Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.714639 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rlctl" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.714647 4825 scope.go:117] "RemoveContainer" containerID="e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.735451 4825 scope.go:117] "RemoveContainer" containerID="e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490" Feb 19 00:14:45 crc kubenswrapper[4825]: E0219 00:14:45.735921 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490\": container with ID starting with e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490 not found: ID does not exist" containerID="e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.735967 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490"} err="failed to get container status \"e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490\": rpc error: code = NotFound desc = could not find container \"e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490\": container with ID starting with e52a3b385b7356dbedb87d9fa47a8b54917e37e27bcec2a4b4f4bffe8644d490 not found: ID does not exist" Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.759388 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rlctl"] Feb 19 00:14:45 crc kubenswrapper[4825]: I0219 00:14:45.769631 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rlctl"] Feb 19 00:14:47 crc kubenswrapper[4825]: I0219 00:14:47.073453 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="43b75e6e-5b2d-4690-b500-20ad18a1e042" path="/var/lib/kubelet/pods/43b75e6e-5b2d-4690-b500-20ad18a1e042/volumes" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.178025 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s"] Feb 19 00:15:00 crc kubenswrapper[4825]: E0219 00:15:00.180108 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b75e6e-5b2d-4690-b500-20ad18a1e042" containerName="registry" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.180188 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b75e6e-5b2d-4690-b500-20ad18a1e042" containerName="registry" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.180332 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b75e6e-5b2d-4690-b500-20ad18a1e042" containerName="registry" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.180769 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.183913 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.183924 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.185100 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s"] Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.271114 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-config-volume\") pod \"collect-profiles-29524335-9fq2s\" (UID: \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.271567 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j7wt\" (UniqueName: \"kubernetes.io/projected/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-kube-api-access-6j7wt\") pod \"collect-profiles-29524335-9fq2s\" (UID: \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.272386 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-secret-volume\") pod \"collect-profiles-29524335-9fq2s\" (UID: \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.373486 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-secret-volume\") pod \"collect-profiles-29524335-9fq2s\" (UID: \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.373836 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-config-volume\") pod \"collect-profiles-29524335-9fq2s\" (UID: \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.373876 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j7wt\" (UniqueName: \"kubernetes.io/projected/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-kube-api-access-6j7wt\") pod \"collect-profiles-29524335-9fq2s\" (UID: \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.374936 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-config-volume\") pod \"collect-profiles-29524335-9fq2s\" (UID: \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.386591 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-secret-volume\") pod \"collect-profiles-29524335-9fq2s\" (UID: \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.388940 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j7wt\" (UniqueName: \"kubernetes.io/projected/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-kube-api-access-6j7wt\") pod \"collect-profiles-29524335-9fq2s\" (UID: \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.506801 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.696771 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s"] Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.845822 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" event={"ID":"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a","Type":"ContainerStarted","Data":"2d91794b22672e9c1f62d1607f948e9330ac108b6a75aa5818530073b4d21651"} Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.845871 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" event={"ID":"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a","Type":"ContainerStarted","Data":"7fa58dba93dd2da489380bc9ba2aa7e28051a33f095c789276bec90cef116635"} Feb 19 00:15:00 crc kubenswrapper[4825]: I0219 00:15:00.858677 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" 
podStartSLOduration=0.858658143 podStartE2EDuration="858.658143ms" podCreationTimestamp="2026-02-19 00:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:15:00.856967658 +0000 UTC m=+446.547933705" watchObservedRunningTime="2026-02-19 00:15:00.858658143 +0000 UTC m=+446.549624190" Feb 19 00:15:01 crc kubenswrapper[4825]: I0219 00:15:01.853046 4825 generic.go:334] "Generic (PLEG): container finished" podID="cfd38bd1-acca-4030-bd8e-f7fa5d41df3a" containerID="2d91794b22672e9c1f62d1607f948e9330ac108b6a75aa5818530073b4d21651" exitCode=0 Feb 19 00:15:01 crc kubenswrapper[4825]: I0219 00:15:01.853154 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" event={"ID":"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a","Type":"ContainerDied","Data":"2d91794b22672e9c1f62d1607f948e9330ac108b6a75aa5818530073b4d21651"} Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.129094 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.209976 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j7wt\" (UniqueName: \"kubernetes.io/projected/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-kube-api-access-6j7wt\") pod \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\" (UID: \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\") " Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.210127 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-config-volume\") pod \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\" (UID: \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\") " Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.210224 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-secret-volume\") pod \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\" (UID: \"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a\") " Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.212010 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-config-volume" (OuterVolumeSpecName: "config-volume") pod "cfd38bd1-acca-4030-bd8e-f7fa5d41df3a" (UID: "cfd38bd1-acca-4030-bd8e-f7fa5d41df3a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.219839 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cfd38bd1-acca-4030-bd8e-f7fa5d41df3a" (UID: "cfd38bd1-acca-4030-bd8e-f7fa5d41df3a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.220044 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-kube-api-access-6j7wt" (OuterVolumeSpecName: "kube-api-access-6j7wt") pod "cfd38bd1-acca-4030-bd8e-f7fa5d41df3a" (UID: "cfd38bd1-acca-4030-bd8e-f7fa5d41df3a"). InnerVolumeSpecName "kube-api-access-6j7wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.312159 4825 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.312231 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j7wt\" (UniqueName: \"kubernetes.io/projected/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-kube-api-access-6j7wt\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.312250 4825 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfd38bd1-acca-4030-bd8e-f7fa5d41df3a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.868907 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" event={"ID":"cfd38bd1-acca-4030-bd8e-f7fa5d41df3a","Type":"ContainerDied","Data":"7fa58dba93dd2da489380bc9ba2aa7e28051a33f095c789276bec90cef116635"} Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.868946 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa58dba93dd2da489380bc9ba2aa7e28051a33f095c789276bec90cef116635" Feb 19 00:15:03 crc kubenswrapper[4825]: I0219 00:15:03.868943 4825 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-9fq2s" Feb 19 00:16:58 crc kubenswrapper[4825]: I0219 00:16:58.822994 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:16:58 crc kubenswrapper[4825]: I0219 00:16:58.825205 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:17:11 crc kubenswrapper[4825]: I0219 00:17:11.927557 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bdpln"] Feb 19 00:17:11 crc kubenswrapper[4825]: I0219 00:17:11.933079 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovn-controller" containerID="cri-o://5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5" gracePeriod=30 Feb 19 00:17:11 crc kubenswrapper[4825]: I0219 00:17:11.933182 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="nbdb" containerID="cri-o://fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a" gracePeriod=30 Feb 19 00:17:11 crc kubenswrapper[4825]: I0219 00:17:11.933260 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" 
containerName="kube-rbac-proxy-node" containerID="cri-o://8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e" gracePeriod=30
Feb 19 00:17:11 crc kubenswrapper[4825]: I0219 00:17:11.933313 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="northd" containerID="cri-o://13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc" gracePeriod=30
Feb 19 00:17:11 crc kubenswrapper[4825]: I0219 00:17:11.933315 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovn-acl-logging" containerID="cri-o://13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a" gracePeriod=30
Feb 19 00:17:11 crc kubenswrapper[4825]: I0219 00:17:11.933557 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="sbdb" containerID="cri-o://16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705" gracePeriod=30
Feb 19 00:17:11 crc kubenswrapper[4825]: I0219 00:17:11.933257 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2" gracePeriod=30
Feb 19 00:17:11 crc kubenswrapper[4825]: I0219 00:17:11.981085 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller" containerID="cri-o://f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11" gracePeriod=30
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.274926 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/3.log"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.277686 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovn-acl-logging/0.log"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.278122 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovn-controller/0.log"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.278560 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344299 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t84jx"]
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344530 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="kube-rbac-proxy-ovn-metrics"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344544 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="kube-rbac-proxy-ovn-metrics"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344555 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="kubecfg-setup"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344561 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="kubecfg-setup"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344568 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344574 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344585 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="northd"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344591 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="northd"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344599 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd38bd1-acca-4030-bd8e-f7fa5d41df3a" containerName="collect-profiles"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344605 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd38bd1-acca-4030-bd8e-f7fa5d41df3a" containerName="collect-profiles"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344615 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovn-acl-logging"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344621 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovn-acl-logging"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344629 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="sbdb"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344635 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="sbdb"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344643 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344650 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344658 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344665 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344673 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="kube-rbac-proxy-node"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344679 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="kube-rbac-proxy-node"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344689 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="nbdb"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344695 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="nbdb"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344701 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovn-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344707 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovn-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344716 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344721 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344815 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="kube-rbac-proxy-ovn-metrics"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344826 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovn-acl-logging"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344832 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="nbdb"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344840 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="northd"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344848 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344856 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344862 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="sbdb"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344868 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="kube-rbac-proxy-node"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344875 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344883 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfd38bd1-acca-4030-bd8e-f7fa5d41df3a" containerName="collect-profiles"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344890 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovn-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.344969 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.344977 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.345062 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.345073 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerName="ovnkube-controller"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.346453 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400218 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-systemd-units\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400314 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovnkube-script-lib\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400350 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-log-socket\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400375 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-cni-bin\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400364 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400413 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-node-log\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400485 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-node-log" (OuterVolumeSpecName: "node-log") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400594 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-env-overrides\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400634 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400701 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-ovn\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400680 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-log-socket" (OuterVolumeSpecName: "log-socket") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400736 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400755 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-var-lib-openvswitch\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400751 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400781 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400842 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovn-node-metrics-cert\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400878 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-openvswitch\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400780 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400916 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovnkube-config\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400938 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.400996 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc4zd\" (UniqueName: \"kubernetes.io/projected/0c24ef0e-b402-4585-a79a-6b98b9896f5a-kube-api-access-sc4zd\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401031 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-run-netns\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401059 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-cni-netd\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401093 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-run-ovn-kubernetes\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401121 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-kubelet\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401160 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-slash\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401167 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401184 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401196 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-etc-openvswitch\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401214 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401226 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401242 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-systemd\") pod \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\" (UID: \"0c24ef0e-b402-4585-a79a-6b98b9896f5a\") "
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401256 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401259 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-slash" (OuterVolumeSpecName: "host-slash") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401277 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401457 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401464 4825 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-slash\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401498 4825 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401535 4825 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401551 4825 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-log-socket\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401564 4825 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401575 4825 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-node-log\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401586 4825 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401588 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401601 4825 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401616 4825 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401630 4825 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401642 4825 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401654 4825 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401666 4825 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401679 4825 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.401691 4825 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.407660 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c24ef0e-b402-4585-a79a-6b98b9896f5a-kube-api-access-sc4zd" (OuterVolumeSpecName: "kube-api-access-sc4zd") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "kube-api-access-sc4zd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.407792 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.414708 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0c24ef0e-b402-4585-a79a-6b98b9896f5a" (UID: "0c24ef0e-b402-4585-a79a-6b98b9896f5a"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.503957 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4frs\" (UniqueName: \"kubernetes.io/projected/4e37714c-5cac-4d61-b4f0-9a6287b9e833-kube-api-access-j4frs\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.504203 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-slash\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.504274 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.504383 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-cni-bin\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.504421 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e37714c-5cac-4d61-b4f0-9a6287b9e833-env-overrides\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.504469 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-log-socket\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.504573 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-node-log\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.504698 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-etc-openvswitch\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.504756 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-run-systemd\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.504804 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-systemd-units\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.504845 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e37714c-5cac-4d61-b4f0-9a6287b9e833-ovn-node-metrics-cert\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.504923 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-kubelet\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505003 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e37714c-5cac-4d61-b4f0-9a6287b9e833-ovnkube-config\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505149 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-run-netns\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505262 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-var-lib-openvswitch\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505341 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-run-openvswitch\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505391 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4e37714c-5cac-4d61-b4f0-9a6287b9e833-ovnkube-script-lib\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505446 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-run-ovn-kubernetes\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505496 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-cni-netd\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505670 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-run-ovn\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx"
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505829 4825 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505878 4825 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c24ef0e-b402-4585-a79a-6b98b9896f5a-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505912 4825 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505944 4825 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c24ef0e-b402-4585-a79a-6b98b9896f5a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.505972 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc4zd\" (UniqueName:
\"kubernetes.io/projected/0c24ef0e-b402-4585-a79a-6b98b9896f5a-kube-api-access-sc4zd\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.513103 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovnkube-controller/3.log" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.515655 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovn-acl-logging/0.log" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.516152 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bdpln_0c24ef0e-b402-4585-a79a-6b98b9896f5a/ovn-controller/0.log" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.516784 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11" exitCode=0 Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.516814 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705" exitCode=0 Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.516871 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a" exitCode=0 Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.516882 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc" exitCode=0 Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.516891 4825 generic.go:334] "Generic (PLEG): container finished" 
podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2" exitCode=0 Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.516899 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e" exitCode=0 Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.516905 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a" exitCode=143 Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.516912 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" containerID="5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5" exitCode=143 Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.516921 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.516841 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517098 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517123 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517137 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517151 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517163 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e"} Feb 19 00:17:12 crc 
kubenswrapper[4825]: I0219 00:17:12.517177 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517192 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517203 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517214 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517222 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517231 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517238 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517246 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5"} Feb 19 00:17:12 crc 
kubenswrapper[4825]: I0219 00:17:12.517252 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517261 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517274 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517282 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517289 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517296 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517302 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517310 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517154 4825 scope.go:117] "RemoveContainer" containerID="f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517318 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517680 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517713 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517728 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517766 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517810 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517828 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517840 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517853 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517867 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517878 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517890 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517901 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517913 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517924 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517941 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bdpln" event={"ID":"0c24ef0e-b402-4585-a79a-6b98b9896f5a","Type":"ContainerDied","Data":"4f065c9b79301d18b2d8b03bcf6c82fe34e626bb5200be38e719f5a91c57e5f1"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517958 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517975 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517987 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.517998 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.518010 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.518021 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.518031 4825 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.518043 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.518055 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.518066 4825 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.522893 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfx7x_2daa6777-c1b1-4fae-9c14-cfe10867288a/kube-multus/2.log" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.524405 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfx7x_2daa6777-c1b1-4fae-9c14-cfe10867288a/kube-multus/1.log" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.524446 4825 generic.go:334] "Generic (PLEG): container finished" podID="2daa6777-c1b1-4fae-9c14-cfe10867288a" containerID="e59b570df152a7fa6610b67dc946a6c9ad47eb9cb82e546c6406b9a5982d6f99" exitCode=2 Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.524474 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfx7x" event={"ID":"2daa6777-c1b1-4fae-9c14-cfe10867288a","Type":"ContainerDied","Data":"e59b570df152a7fa6610b67dc946a6c9ad47eb9cb82e546c6406b9a5982d6f99"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.524492 4825 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e9505df7615b5027220ed25ee309b0a066503c64d3f8f45ef3fc23de1af2ac4"} Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.524871 4825 scope.go:117] "RemoveContainer" containerID="e59b570df152a7fa6610b67dc946a6c9ad47eb9cb82e546c6406b9a5982d6f99" Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.525174 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zfx7x_openshift-multus(2daa6777-c1b1-4fae-9c14-cfe10867288a)\"" pod="openshift-multus/multus-zfx7x" podUID="2daa6777-c1b1-4fae-9c14-cfe10867288a" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.539007 4825 scope.go:117] "RemoveContainer" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.575236 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bdpln"] Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.578275 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bdpln"] Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.581319 4825 scope.go:117] "RemoveContainer" containerID="16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.600532 4825 scope.go:117] "RemoveContainer" containerID="fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.606437 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4frs\" (UniqueName: \"kubernetes.io/projected/4e37714c-5cac-4d61-b4f0-9a6287b9e833-kube-api-access-j4frs\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.606476 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-slash\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.606515 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.606537 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-cni-bin\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.606555 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e37714c-5cac-4d61-b4f0-9a6287b9e833-env-overrides\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.606571 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-log-socket\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" 
Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.606624 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-slash\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.606724 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-log-socket\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.606774 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-cni-bin\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.606875 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.607734 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4e37714c-5cac-4d61-b4f0-9a6287b9e833-env-overrides\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.607809 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-node-log\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.607839 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-etc-openvswitch\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.607924 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-run-systemd\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.607928 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-etc-openvswitch\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.607948 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-systemd-units\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.607995 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-run-systemd\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.607888 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-node-log\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.607985 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-systemd-units\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.608040 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e37714c-5cac-4d61-b4f0-9a6287b9e833-ovn-node-metrics-cert\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.608278 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-kubelet\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.608299 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-kubelet\") pod \"ovnkube-node-t84jx\" (UID: 
\"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.609653 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e37714c-5cac-4d61-b4f0-9a6287b9e833-ovnkube-config\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.609896 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4e37714c-5cac-4d61-b4f0-9a6287b9e833-ovnkube-config\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.609926 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-run-netns\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.609975 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-run-netns\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.609992 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-var-lib-openvswitch\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.610065 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-run-openvswitch\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.610101 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4e37714c-5cac-4d61-b4f0-9a6287b9e833-ovnkube-script-lib\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.610147 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-run-openvswitch\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.610162 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-var-lib-openvswitch\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.610215 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-run-ovn-kubernetes\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 
19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.610347 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-cni-netd\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.610384 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-run-ovn-kubernetes\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.610433 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-host-cni-netd\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.611652 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-run-ovn\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.611751 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4e37714c-5cac-4d61-b4f0-9a6287b9e833-run-ovn\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.611925 4825 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4e37714c-5cac-4d61-b4f0-9a6287b9e833-ovnkube-script-lib\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.619275 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4e37714c-5cac-4d61-b4f0-9a6287b9e833-ovn-node-metrics-cert\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.623333 4825 scope.go:117] "RemoveContainer" containerID="13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.629678 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4frs\" (UniqueName: \"kubernetes.io/projected/4e37714c-5cac-4d61-b4f0-9a6287b9e833-kube-api-access-j4frs\") pod \"ovnkube-node-t84jx\" (UID: \"4e37714c-5cac-4d61-b4f0-9a6287b9e833\") " pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.643690 4825 scope.go:117] "RemoveContainer" containerID="4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.658636 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.666842 4825 scope.go:117] "RemoveContainer" containerID="8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.690211 4825 scope.go:117] "RemoveContainer" containerID="13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.711283 4825 scope.go:117] "RemoveContainer" containerID="5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.736714 4825 scope.go:117] "RemoveContainer" containerID="1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.760391 4825 scope.go:117] "RemoveContainer" containerID="f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11" Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.761036 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11\": container with ID starting with f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11 not found: ID does not exist" containerID="f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.761124 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11"} err="failed to get container status \"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11\": rpc error: code = NotFound desc = could not find container \"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11\": container with ID starting with f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11 
not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.761165 4825 scope.go:117] "RemoveContainer" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.761757 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\": container with ID starting with 28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f not found: ID does not exist" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.761805 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f"} err="failed to get container status \"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\": rpc error: code = NotFound desc = could not find container \"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\": container with ID starting with 28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.761842 4825 scope.go:117] "RemoveContainer" containerID="16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705" Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.762230 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\": container with ID starting with 16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705 not found: ID does not exist" containerID="16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.762279 4825 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705"} err="failed to get container status \"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\": rpc error: code = NotFound desc = could not find container \"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\": container with ID starting with 16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.762303 4825 scope.go:117] "RemoveContainer" containerID="fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a" Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.763046 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\": container with ID starting with fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a not found: ID does not exist" containerID="fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.763096 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a"} err="failed to get container status \"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\": rpc error: code = NotFound desc = could not find container \"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\": container with ID starting with fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.763120 4825 scope.go:117] "RemoveContainer" containerID="13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc" Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 
00:17:12.763573 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\": container with ID starting with 13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc not found: ID does not exist" containerID="13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.763626 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc"} err="failed to get container status \"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\": rpc error: code = NotFound desc = could not find container \"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\": container with ID starting with 13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.763653 4825 scope.go:117] "RemoveContainer" containerID="4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2" Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.764028 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\": container with ID starting with 4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2 not found: ID does not exist" containerID="4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.764068 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2"} err="failed to get container status \"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\": rpc 
error: code = NotFound desc = could not find container \"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\": container with ID starting with 4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.764114 4825 scope.go:117] "RemoveContainer" containerID="8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e" Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.764550 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\": container with ID starting with 8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e not found: ID does not exist" containerID="8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.764605 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e"} err="failed to get container status \"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\": rpc error: code = NotFound desc = could not find container \"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\": container with ID starting with 8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.764634 4825 scope.go:117] "RemoveContainer" containerID="13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a" Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.765045 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\": container with ID starting with 
13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a not found: ID does not exist" containerID="13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.765081 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a"} err="failed to get container status \"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\": rpc error: code = NotFound desc = could not find container \"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\": container with ID starting with 13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.765104 4825 scope.go:117] "RemoveContainer" containerID="5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5" Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.765419 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\": container with ID starting with 5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5 not found: ID does not exist" containerID="5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.765471 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5"} err="failed to get container status \"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\": rpc error: code = NotFound desc = could not find container \"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\": container with ID starting with 5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5 not found: ID does not 
exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.765495 4825 scope.go:117] "RemoveContainer" containerID="1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78" Feb 19 00:17:12 crc kubenswrapper[4825]: E0219 00:17:12.765781 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\": container with ID starting with 1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78 not found: ID does not exist" containerID="1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.765816 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78"} err="failed to get container status \"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\": rpc error: code = NotFound desc = could not find container \"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\": container with ID starting with 1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.765836 4825 scope.go:117] "RemoveContainer" containerID="f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.766294 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11"} err="failed to get container status \"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11\": rpc error: code = NotFound desc = could not find container \"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11\": container with ID starting with f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11 not found: ID 
does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.766324 4825 scope.go:117] "RemoveContainer" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.766963 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f"} err="failed to get container status \"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\": rpc error: code = NotFound desc = could not find container \"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\": container with ID starting with 28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.766988 4825 scope.go:117] "RemoveContainer" containerID="16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.767411 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705"} err="failed to get container status \"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\": rpc error: code = NotFound desc = could not find container \"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\": container with ID starting with 16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.767439 4825 scope.go:117] "RemoveContainer" containerID="fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.767770 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a"} err="failed to get container 
status \"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\": rpc error: code = NotFound desc = could not find container \"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\": container with ID starting with fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.767795 4825 scope.go:117] "RemoveContainer" containerID="13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.768272 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc"} err="failed to get container status \"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\": rpc error: code = NotFound desc = could not find container \"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\": container with ID starting with 13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.768309 4825 scope.go:117] "RemoveContainer" containerID="4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.768865 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2"} err="failed to get container status \"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\": rpc error: code = NotFound desc = could not find container \"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\": container with ID starting with 4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.768895 4825 scope.go:117] "RemoveContainer" 
containerID="8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.769868 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e"} err="failed to get container status \"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\": rpc error: code = NotFound desc = could not find container \"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\": container with ID starting with 8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.769906 4825 scope.go:117] "RemoveContainer" containerID="13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.770938 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a"} err="failed to get container status \"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\": rpc error: code = NotFound desc = could not find container \"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\": container with ID starting with 13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.770975 4825 scope.go:117] "RemoveContainer" containerID="5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.771333 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5"} err="failed to get container status \"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\": rpc error: code = NotFound desc = could 
not find container \"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\": container with ID starting with 5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.771370 4825 scope.go:117] "RemoveContainer" containerID="1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.771730 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78"} err="failed to get container status \"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\": rpc error: code = NotFound desc = could not find container \"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\": container with ID starting with 1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.771760 4825 scope.go:117] "RemoveContainer" containerID="f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.772450 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11"} err="failed to get container status \"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11\": rpc error: code = NotFound desc = could not find container \"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11\": container with ID starting with f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.772482 4825 scope.go:117] "RemoveContainer" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 
00:17:12.773010 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f"} err="failed to get container status \"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\": rpc error: code = NotFound desc = could not find container \"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\": container with ID starting with 28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.773080 4825 scope.go:117] "RemoveContainer" containerID="16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.773480 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705"} err="failed to get container status \"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\": rpc error: code = NotFound desc = could not find container \"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\": container with ID starting with 16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.773532 4825 scope.go:117] "RemoveContainer" containerID="fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.773952 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a"} err="failed to get container status \"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\": rpc error: code = NotFound desc = could not find container \"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\": container with ID starting with 
fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.773988 4825 scope.go:117] "RemoveContainer" containerID="13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.774253 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc"} err="failed to get container status \"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\": rpc error: code = NotFound desc = could not find container \"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\": container with ID starting with 13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.774280 4825 scope.go:117] "RemoveContainer" containerID="4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.774732 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2"} err="failed to get container status \"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\": rpc error: code = NotFound desc = could not find container \"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\": container with ID starting with 4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.774765 4825 scope.go:117] "RemoveContainer" containerID="8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.775148 4825 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e"} err="failed to get container status \"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\": rpc error: code = NotFound desc = could not find container \"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\": container with ID starting with 8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.775177 4825 scope.go:117] "RemoveContainer" containerID="13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.775520 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a"} err="failed to get container status \"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\": rpc error: code = NotFound desc = could not find container \"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\": container with ID starting with 13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.775548 4825 scope.go:117] "RemoveContainer" containerID="5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.775903 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5"} err="failed to get container status \"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\": rpc error: code = NotFound desc = could not find container \"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\": container with ID starting with 5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5 not found: ID does not 
exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.775930 4825 scope.go:117] "RemoveContainer" containerID="1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.776245 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78"} err="failed to get container status \"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\": rpc error: code = NotFound desc = could not find container \"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\": container with ID starting with 1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.776291 4825 scope.go:117] "RemoveContainer" containerID="f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.776910 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11"} err="failed to get container status \"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11\": rpc error: code = NotFound desc = could not find container \"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11\": container with ID starting with f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.776949 4825 scope.go:117] "RemoveContainer" containerID="28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.777224 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f"} err="failed to get container status 
\"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\": rpc error: code = NotFound desc = could not find container \"28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f\": container with ID starting with 28841dccb3f4be34de5ae0de553c054129c30b350a9bf284d2cf7867b621e83f not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.777255 4825 scope.go:117] "RemoveContainer" containerID="16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.777741 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705"} err="failed to get container status \"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\": rpc error: code = NotFound desc = could not find container \"16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705\": container with ID starting with 16ecb94bdfcc663db2c61ac1b04641aa5716871c2884207941f5f3b5c54c9705 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.777791 4825 scope.go:117] "RemoveContainer" containerID="fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.778238 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a"} err="failed to get container status \"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\": rpc error: code = NotFound desc = could not find container \"fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a\": container with ID starting with fa629cbac94423fc99995cbc0387730a6d71ee729dd57adcd3dd1da4483fb09a not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.778274 4825 scope.go:117] "RemoveContainer" 
containerID="13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.778605 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc"} err="failed to get container status \"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\": rpc error: code = NotFound desc = could not find container \"13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc\": container with ID starting with 13922f431339b1a6bed526ca2cd5c1d319293a0cc23b4b9c9b2a96d0fd15afbc not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.778649 4825 scope.go:117] "RemoveContainer" containerID="4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.778987 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2"} err="failed to get container status \"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\": rpc error: code = NotFound desc = could not find container \"4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2\": container with ID starting with 4695addb1caa979865c6a9a2d8d458958085c0a0fb6de064412863e3fb4af7d2 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.779014 4825 scope.go:117] "RemoveContainer" containerID="8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.779324 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e"} err="failed to get container status \"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\": rpc error: code = NotFound desc = could 
not find container \"8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e\": container with ID starting with 8d6caf8ea88a04c82a2b15a693748eac4a03d7684c407c4c7e78f78b1822676e not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.779362 4825 scope.go:117] "RemoveContainer" containerID="13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.779665 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a"} err="failed to get container status \"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\": rpc error: code = NotFound desc = could not find container \"13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a\": container with ID starting with 13ad19661bcf567779f2325787cfbc9855a07d66245439d34af3aaea5e9f8f8a not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.779689 4825 scope.go:117] "RemoveContainer" containerID="5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.780592 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5"} err="failed to get container status \"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\": rpc error: code = NotFound desc = could not find container \"5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5\": container with ID starting with 5e0775480aeacdb4a9e1c60c32add8dddfb44bae80b9322f94121cb126ca97a5 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.780636 4825 scope.go:117] "RemoveContainer" containerID="1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 
00:17:12.780951 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78"} err="failed to get container status \"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\": rpc error: code = NotFound desc = could not find container \"1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78\": container with ID starting with 1acc49447aff3d276d771ba0b93e8994f44e0c338bbc83fd76465cc370784a78 not found: ID does not exist" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.780986 4825 scope.go:117] "RemoveContainer" containerID="f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11" Feb 19 00:17:12 crc kubenswrapper[4825]: I0219 00:17:12.781301 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11"} err="failed to get container status \"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11\": rpc error: code = NotFound desc = could not find container \"f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11\": container with ID starting with f258e5b1bea91cf3ceb126d18c0ebb5bc868c35d1014de9468830c293f9fad11 not found: ID does not exist" Feb 19 00:17:13 crc kubenswrapper[4825]: I0219 00:17:13.080065 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c24ef0e-b402-4585-a79a-6b98b9896f5a" path="/var/lib/kubelet/pods/0c24ef0e-b402-4585-a79a-6b98b9896f5a/volumes" Feb 19 00:17:13 crc kubenswrapper[4825]: I0219 00:17:13.535828 4825 generic.go:334] "Generic (PLEG): container finished" podID="4e37714c-5cac-4d61-b4f0-9a6287b9e833" containerID="c660faa92cc29c1fb9848eb6a1ccdbb547cf95e3004955ec03b0935098644f36" exitCode=0 Feb 19 00:17:13 crc kubenswrapper[4825]: I0219 00:17:13.535918 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" 
event={"ID":"4e37714c-5cac-4d61-b4f0-9a6287b9e833","Type":"ContainerDied","Data":"c660faa92cc29c1fb9848eb6a1ccdbb547cf95e3004955ec03b0935098644f36"} Feb 19 00:17:13 crc kubenswrapper[4825]: I0219 00:17:13.536304 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" event={"ID":"4e37714c-5cac-4d61-b4f0-9a6287b9e833","Type":"ContainerStarted","Data":"48357618b81d80ec4c93912b4b33b76cd2df95641d1a260e4da7031382371078"} Feb 19 00:17:14 crc kubenswrapper[4825]: I0219 00:17:14.552451 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" event={"ID":"4e37714c-5cac-4d61-b4f0-9a6287b9e833","Type":"ContainerStarted","Data":"f82244bfc3cdbeb440c728697458ce081161eb2438cc023e299196a8f6f61140"} Feb 19 00:17:14 crc kubenswrapper[4825]: I0219 00:17:14.553314 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" event={"ID":"4e37714c-5cac-4d61-b4f0-9a6287b9e833","Type":"ContainerStarted","Data":"26699991ac83e3a67df4f2d70885e5d2a5c09d426ea316b038af8768a4f3b000"} Feb 19 00:17:14 crc kubenswrapper[4825]: I0219 00:17:14.553332 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" event={"ID":"4e37714c-5cac-4d61-b4f0-9a6287b9e833","Type":"ContainerStarted","Data":"63668a12d9b7ad655d3c1b50dfc133009debdf7c742ee21576c740dcd1565ee2"} Feb 19 00:17:14 crc kubenswrapper[4825]: I0219 00:17:14.553344 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" event={"ID":"4e37714c-5cac-4d61-b4f0-9a6287b9e833","Type":"ContainerStarted","Data":"0323dc7c351f5f5ff421bafcd0cae60705786434b7cf95507259e40f616e4647"} Feb 19 00:17:14 crc kubenswrapper[4825]: I0219 00:17:14.553360 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" 
event={"ID":"4e37714c-5cac-4d61-b4f0-9a6287b9e833","Type":"ContainerStarted","Data":"050807f4477b7158a7ee3824992d68999c29aabf58590088925d5fa560b3d39f"} Feb 19 00:17:15 crc kubenswrapper[4825]: I0219 00:17:15.564187 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" event={"ID":"4e37714c-5cac-4d61-b4f0-9a6287b9e833","Type":"ContainerStarted","Data":"a0da47ba1660424c6f4002430315e95221486722da85cced1e7f3e1c142d479b"} Feb 19 00:17:17 crc kubenswrapper[4825]: I0219 00:17:17.586662 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" event={"ID":"4e37714c-5cac-4d61-b4f0-9a6287b9e833","Type":"ContainerStarted","Data":"57e8a7e321eb7b81c28de7fb7ea3395fd4a7adc669b2d10e44b0f9a47ad99e6d"} Feb 19 00:17:19 crc kubenswrapper[4825]: I0219 00:17:19.612949 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" event={"ID":"4e37714c-5cac-4d61-b4f0-9a6287b9e833","Type":"ContainerStarted","Data":"f9838a71668943deba2d7847e66e32e9aff8aaccbf58089755a17dc76b262efb"} Feb 19 00:17:19 crc kubenswrapper[4825]: I0219 00:17:19.613974 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:19 crc kubenswrapper[4825]: I0219 00:17:19.613992 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:19 crc kubenswrapper[4825]: I0219 00:17:19.614002 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:19 crc kubenswrapper[4825]: I0219 00:17:19.654075 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:19 crc kubenswrapper[4825]: I0219 00:17:19.659818 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" podStartSLOduration=7.659796343 podStartE2EDuration="7.659796343s" podCreationTimestamp="2026-02-19 00:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:17:19.656594697 +0000 UTC m=+585.347560754" watchObservedRunningTime="2026-02-19 00:17:19.659796343 +0000 UTC m=+585.350762390" Feb 19 00:17:19 crc kubenswrapper[4825]: I0219 00:17:19.671294 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:28 crc kubenswrapper[4825]: I0219 00:17:28.065736 4825 scope.go:117] "RemoveContainer" containerID="e59b570df152a7fa6610b67dc946a6c9ad47eb9cb82e546c6406b9a5982d6f99" Feb 19 00:17:28 crc kubenswrapper[4825]: E0219 00:17:28.067036 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-zfx7x_openshift-multus(2daa6777-c1b1-4fae-9c14-cfe10867288a)\"" pod="openshift-multus/multus-zfx7x" podUID="2daa6777-c1b1-4fae-9c14-cfe10867288a" Feb 19 00:17:28 crc kubenswrapper[4825]: I0219 00:17:28.824260 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:17:28 crc kubenswrapper[4825]: I0219 00:17:28.824948 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:17:35 crc kubenswrapper[4825]: 
I0219 00:17:35.432150 4825 scope.go:117] "RemoveContainer" containerID="1e9505df7615b5027220ed25ee309b0a066503c64d3f8f45ef3fc23de1af2ac4" Feb 19 00:17:35 crc kubenswrapper[4825]: I0219 00:17:35.744085 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfx7x_2daa6777-c1b1-4fae-9c14-cfe10867288a/kube-multus/2.log" Feb 19 00:17:42 crc kubenswrapper[4825]: I0219 00:17:42.066645 4825 scope.go:117] "RemoveContainer" containerID="e59b570df152a7fa6610b67dc946a6c9ad47eb9cb82e546c6406b9a5982d6f99" Feb 19 00:17:42 crc kubenswrapper[4825]: I0219 00:17:42.680655 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t84jx" Feb 19 00:17:42 crc kubenswrapper[4825]: I0219 00:17:42.792945 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zfx7x_2daa6777-c1b1-4fae-9c14-cfe10867288a/kube-multus/2.log" Feb 19 00:17:42 crc kubenswrapper[4825]: I0219 00:17:42.793008 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zfx7x" event={"ID":"2daa6777-c1b1-4fae-9c14-cfe10867288a","Type":"ContainerStarted","Data":"3aa6a3f27ea0c36bfc1a43492540c19418ae829340592dec41b519f96d3b5686"} Feb 19 00:17:58 crc kubenswrapper[4825]: I0219 00:17:58.824308 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:17:58 crc kubenswrapper[4825]: I0219 00:17:58.825009 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:17:58 
crc kubenswrapper[4825]: I0219 00:17:58.825077 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:17:58 crc kubenswrapper[4825]: I0219 00:17:58.825913 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3977a1de60d33698055567352ee370d0b71d26733409f8b00d78c8c89781f897"} pod="openshift-machine-config-operator/machine-config-daemon-tggq9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:17:58 crc kubenswrapper[4825]: I0219 00:17:58.825978 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" containerID="cri-o://3977a1de60d33698055567352ee370d0b71d26733409f8b00d78c8c89781f897" gracePeriod=600 Feb 19 00:17:59 crc kubenswrapper[4825]: I0219 00:17:59.931799 4825 generic.go:334] "Generic (PLEG): container finished" podID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerID="3977a1de60d33698055567352ee370d0b71d26733409f8b00d78c8c89781f897" exitCode=0 Feb 19 00:17:59 crc kubenswrapper[4825]: I0219 00:17:59.931887 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerDied","Data":"3977a1de60d33698055567352ee370d0b71d26733409f8b00d78c8c89781f897"} Feb 19 00:17:59 crc kubenswrapper[4825]: I0219 00:17:59.932768 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerStarted","Data":"f4528630abd9298fa6ddba9ae1d069773d3681c2d7b7aa972cb3ffb2f6b64f7c"} Feb 19 00:17:59 crc kubenswrapper[4825]: I0219 
00:17:59.932822 4825 scope.go:117] "RemoveContainer" containerID="53baafc0a5b9c1c224604d4119e0e92aee7172ade69be62b2ef5640b6546ae02" Feb 19 00:18:16 crc kubenswrapper[4825]: I0219 00:18:16.169392 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjxd7"] Feb 19 00:18:16 crc kubenswrapper[4825]: I0219 00:18:16.170795 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bjxd7" podUID="af9e7a92-e527-4a28-a75c-fac2d38484d7" containerName="registry-server" containerID="cri-o://ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a" gracePeriod=30 Feb 19 00:18:16 crc kubenswrapper[4825]: I0219 00:18:16.753499 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:18:16 crc kubenswrapper[4825]: I0219 00:18:16.880014 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwbjw\" (UniqueName: \"kubernetes.io/projected/af9e7a92-e527-4a28-a75c-fac2d38484d7-kube-api-access-pwbjw\") pod \"af9e7a92-e527-4a28-a75c-fac2d38484d7\" (UID: \"af9e7a92-e527-4a28-a75c-fac2d38484d7\") " Feb 19 00:18:16 crc kubenswrapper[4825]: I0219 00:18:16.880159 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9e7a92-e527-4a28-a75c-fac2d38484d7-utilities\") pod \"af9e7a92-e527-4a28-a75c-fac2d38484d7\" (UID: \"af9e7a92-e527-4a28-a75c-fac2d38484d7\") " Feb 19 00:18:16 crc kubenswrapper[4825]: I0219 00:18:16.880264 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9e7a92-e527-4a28-a75c-fac2d38484d7-catalog-content\") pod \"af9e7a92-e527-4a28-a75c-fac2d38484d7\" (UID: \"af9e7a92-e527-4a28-a75c-fac2d38484d7\") " Feb 19 00:18:16 crc kubenswrapper[4825]: I0219 
00:18:16.881143 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af9e7a92-e527-4a28-a75c-fac2d38484d7-utilities" (OuterVolumeSpecName: "utilities") pod "af9e7a92-e527-4a28-a75c-fac2d38484d7" (UID: "af9e7a92-e527-4a28-a75c-fac2d38484d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:18:16 crc kubenswrapper[4825]: I0219 00:18:16.885480 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af9e7a92-e527-4a28-a75c-fac2d38484d7-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:16 crc kubenswrapper[4825]: I0219 00:18:16.887351 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af9e7a92-e527-4a28-a75c-fac2d38484d7-kube-api-access-pwbjw" (OuterVolumeSpecName: "kube-api-access-pwbjw") pod "af9e7a92-e527-4a28-a75c-fac2d38484d7" (UID: "af9e7a92-e527-4a28-a75c-fac2d38484d7"). InnerVolumeSpecName "kube-api-access-pwbjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:18:16 crc kubenswrapper[4825]: I0219 00:18:16.904233 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af9e7a92-e527-4a28-a75c-fac2d38484d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af9e7a92-e527-4a28-a75c-fac2d38484d7" (UID: "af9e7a92-e527-4a28-a75c-fac2d38484d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:18:16 crc kubenswrapper[4825]: I0219 00:18:16.987231 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwbjw\" (UniqueName: \"kubernetes.io/projected/af9e7a92-e527-4a28-a75c-fac2d38484d7-kube-api-access-pwbjw\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:16 crc kubenswrapper[4825]: I0219 00:18:16.987271 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af9e7a92-e527-4a28-a75c-fac2d38484d7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.055302 4825 generic.go:334] "Generic (PLEG): container finished" podID="af9e7a92-e527-4a28-a75c-fac2d38484d7" containerID="ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a" exitCode=0 Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.055354 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjxd7" event={"ID":"af9e7a92-e527-4a28-a75c-fac2d38484d7","Type":"ContainerDied","Data":"ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a"} Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.055397 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjxd7" event={"ID":"af9e7a92-e527-4a28-a75c-fac2d38484d7","Type":"ContainerDied","Data":"c22f488c58ee5e816d69693d8f1145e126eaec175d7c1eb5218704ba3df2509f"} Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.055402 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjxd7" Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.055421 4825 scope.go:117] "RemoveContainer" containerID="ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a" Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.079588 4825 scope.go:117] "RemoveContainer" containerID="b8d050772ae770e0e35ada5bb1ab2f716c753bfef3a1ed3b55016c52dd3dc613" Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.088481 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjxd7"] Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.095215 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjxd7"] Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.103578 4825 scope.go:117] "RemoveContainer" containerID="db3640d2a86673752bf48450cb5630eee9ab2214263b66dbd578eef9e4f6e11d" Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.135012 4825 scope.go:117] "RemoveContainer" containerID="ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a" Feb 19 00:18:17 crc kubenswrapper[4825]: E0219 00:18:17.135610 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a\": container with ID starting with ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a not found: ID does not exist" containerID="ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a" Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.135660 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a"} err="failed to get container status \"ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a\": rpc error: code = NotFound desc = could not find container 
\"ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a\": container with ID starting with ec0d8ae5f2780b01102259b0f5a52e6e6b00f903eb907b3504fa2842dc8b993a not found: ID does not exist" Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.135694 4825 scope.go:117] "RemoveContainer" containerID="b8d050772ae770e0e35ada5bb1ab2f716c753bfef3a1ed3b55016c52dd3dc613" Feb 19 00:18:17 crc kubenswrapper[4825]: E0219 00:18:17.136013 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d050772ae770e0e35ada5bb1ab2f716c753bfef3a1ed3b55016c52dd3dc613\": container with ID starting with b8d050772ae770e0e35ada5bb1ab2f716c753bfef3a1ed3b55016c52dd3dc613 not found: ID does not exist" containerID="b8d050772ae770e0e35ada5bb1ab2f716c753bfef3a1ed3b55016c52dd3dc613" Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.136062 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d050772ae770e0e35ada5bb1ab2f716c753bfef3a1ed3b55016c52dd3dc613"} err="failed to get container status \"b8d050772ae770e0e35ada5bb1ab2f716c753bfef3a1ed3b55016c52dd3dc613\": rpc error: code = NotFound desc = could not find container \"b8d050772ae770e0e35ada5bb1ab2f716c753bfef3a1ed3b55016c52dd3dc613\": container with ID starting with b8d050772ae770e0e35ada5bb1ab2f716c753bfef3a1ed3b55016c52dd3dc613 not found: ID does not exist" Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.136094 4825 scope.go:117] "RemoveContainer" containerID="db3640d2a86673752bf48450cb5630eee9ab2214263b66dbd578eef9e4f6e11d" Feb 19 00:18:17 crc kubenswrapper[4825]: E0219 00:18:17.136742 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db3640d2a86673752bf48450cb5630eee9ab2214263b66dbd578eef9e4f6e11d\": container with ID starting with db3640d2a86673752bf48450cb5630eee9ab2214263b66dbd578eef9e4f6e11d not found: ID does not exist" 
containerID="db3640d2a86673752bf48450cb5630eee9ab2214263b66dbd578eef9e4f6e11d"
Feb 19 00:18:17 crc kubenswrapper[4825]: I0219 00:18:17.136798 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db3640d2a86673752bf48450cb5630eee9ab2214263b66dbd578eef9e4f6e11d"} err="failed to get container status \"db3640d2a86673752bf48450cb5630eee9ab2214263b66dbd578eef9e4f6e11d\": rpc error: code = NotFound desc = could not find container \"db3640d2a86673752bf48450cb5630eee9ab2214263b66dbd578eef9e4f6e11d\": container with ID starting with db3640d2a86673752bf48450cb5630eee9ab2214263b66dbd578eef9e4f6e11d not found: ID does not exist"
Feb 19 00:18:19 crc kubenswrapper[4825]: I0219 00:18:19.072374 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af9e7a92-e527-4a28-a75c-fac2d38484d7" path="/var/lib/kubelet/pods/af9e7a92-e527-4a28-a75c-fac2d38484d7/volumes"
Feb 19 00:18:19 crc kubenswrapper[4825]: I0219 00:18:19.982266 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"]
Feb 19 00:18:19 crc kubenswrapper[4825]: E0219 00:18:19.982521 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9e7a92-e527-4a28-a75c-fac2d38484d7" containerName="registry-server"
Feb 19 00:18:19 crc kubenswrapper[4825]: I0219 00:18:19.982535 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9e7a92-e527-4a28-a75c-fac2d38484d7" containerName="registry-server"
Feb 19 00:18:19 crc kubenswrapper[4825]: E0219 00:18:19.982548 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9e7a92-e527-4a28-a75c-fac2d38484d7" containerName="extract-utilities"
Feb 19 00:18:19 crc kubenswrapper[4825]: I0219 00:18:19.982554 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9e7a92-e527-4a28-a75c-fac2d38484d7" containerName="extract-utilities"
Feb 19 00:18:19 crc kubenswrapper[4825]: E0219 00:18:19.982567 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9e7a92-e527-4a28-a75c-fac2d38484d7" containerName="extract-content"
Feb 19 00:18:19 crc kubenswrapper[4825]: I0219 00:18:19.982573 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9e7a92-e527-4a28-a75c-fac2d38484d7" containerName="extract-content"
Feb 19 00:18:19 crc kubenswrapper[4825]: I0219 00:18:19.982655 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="af9e7a92-e527-4a28-a75c-fac2d38484d7" containerName="registry-server"
Feb 19 00:18:19 crc kubenswrapper[4825]: I0219 00:18:19.983493 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:19 crc kubenswrapper[4825]: I0219 00:18:19.986480 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 19 00:18:19 crc kubenswrapper[4825]: I0219 00:18:19.992788 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"]
Feb 19 00:18:20 crc kubenswrapper[4825]: I0219 00:18:20.034827 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7b48e68-4fea-48b5-b0c8-408af47180f5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv\" (UID: \"c7b48e68-4fea-48b5-b0c8-408af47180f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:20 crc kubenswrapper[4825]: I0219 00:18:20.034880 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7b48e68-4fea-48b5-b0c8-408af47180f5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv\" (UID: \"c7b48e68-4fea-48b5-b0c8-408af47180f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:20 crc kubenswrapper[4825]: I0219 00:18:20.035124 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwlhz\" (UniqueName: \"kubernetes.io/projected/c7b48e68-4fea-48b5-b0c8-408af47180f5-kube-api-access-wwlhz\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv\" (UID: \"c7b48e68-4fea-48b5-b0c8-408af47180f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:20 crc kubenswrapper[4825]: I0219 00:18:20.135658 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7b48e68-4fea-48b5-b0c8-408af47180f5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv\" (UID: \"c7b48e68-4fea-48b5-b0c8-408af47180f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:20 crc kubenswrapper[4825]: I0219 00:18:20.135744 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwlhz\" (UniqueName: \"kubernetes.io/projected/c7b48e68-4fea-48b5-b0c8-408af47180f5-kube-api-access-wwlhz\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv\" (UID: \"c7b48e68-4fea-48b5-b0c8-408af47180f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:20 crc kubenswrapper[4825]: I0219 00:18:20.135788 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7b48e68-4fea-48b5-b0c8-408af47180f5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv\" (UID: \"c7b48e68-4fea-48b5-b0c8-408af47180f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:20 crc kubenswrapper[4825]: I0219 00:18:20.136324 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7b48e68-4fea-48b5-b0c8-408af47180f5-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv\" (UID: \"c7b48e68-4fea-48b5-b0c8-408af47180f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:20 crc kubenswrapper[4825]: I0219 00:18:20.136346 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7b48e68-4fea-48b5-b0c8-408af47180f5-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv\" (UID: \"c7b48e68-4fea-48b5-b0c8-408af47180f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:20 crc kubenswrapper[4825]: I0219 00:18:20.154135 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwlhz\" (UniqueName: \"kubernetes.io/projected/c7b48e68-4fea-48b5-b0c8-408af47180f5-kube-api-access-wwlhz\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv\" (UID: \"c7b48e68-4fea-48b5-b0c8-408af47180f5\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:20 crc kubenswrapper[4825]: I0219 00:18:20.307320 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:20 crc kubenswrapper[4825]: I0219 00:18:20.480804 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"]
Feb 19 00:18:21 crc kubenswrapper[4825]: I0219 00:18:21.083627 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv" event={"ID":"c7b48e68-4fea-48b5-b0c8-408af47180f5","Type":"ContainerStarted","Data":"9e831ab909b666b52f66d19b8bc78a475b651b655a91c36f0da7f6a6e19027ff"}
Feb 19 00:18:21 crc kubenswrapper[4825]: I0219 00:18:21.084186 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv" event={"ID":"c7b48e68-4fea-48b5-b0c8-408af47180f5","Type":"ContainerStarted","Data":"82efdefe9ca787d92b3b9d0bc7de7b3c4abca52288a650e5131e52c6d4fa91dc"}
Feb 19 00:18:23 crc kubenswrapper[4825]: I0219 00:18:23.096429 4825 generic.go:334] "Generic (PLEG): container finished" podID="c7b48e68-4fea-48b5-b0c8-408af47180f5" containerID="9e831ab909b666b52f66d19b8bc78a475b651b655a91c36f0da7f6a6e19027ff" exitCode=0
Feb 19 00:18:23 crc kubenswrapper[4825]: I0219 00:18:23.096489 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv" event={"ID":"c7b48e68-4fea-48b5-b0c8-408af47180f5","Type":"ContainerDied","Data":"9e831ab909b666b52f66d19b8bc78a475b651b655a91c36f0da7f6a6e19027ff"}
Feb 19 00:18:23 crc kubenswrapper[4825]: I0219 00:18:23.098246 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.408369 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"]
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.410365 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.419933 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"]
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.538155 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87\" (UID: \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.538219 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkpfj\" (UniqueName: \"kubernetes.io/projected/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-kube-api-access-tkpfj\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87\" (UID: \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.538268 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87\" (UID: \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.639981 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87\" (UID: \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.640034 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkpfj\" (UniqueName: \"kubernetes.io/projected/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-kube-api-access-tkpfj\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87\" (UID: \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.640086 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87\" (UID: \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.640815 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87\" (UID: \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.640854 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87\" (UID: \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.675192 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkpfj\" (UniqueName: \"kubernetes.io/projected/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-kube-api-access-tkpfj\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87\" (UID: \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:25 crc kubenswrapper[4825]: I0219 00:18:25.743610 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:26 crc kubenswrapper[4825]: I0219 00:18:26.283525 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"]
Feb 19 00:18:26 crc kubenswrapper[4825]: W0219 00:18:26.297918 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fdb85b1_37d5_4a3d_8334_a83c7a8bf6ed.slice/crio-0f46a3cd43cdbdf97762c12bc29d693fb203af863c676d0bbff4f0c24b4503a2 WatchSource:0}: Error finding container 0f46a3cd43cdbdf97762c12bc29d693fb203af863c676d0bbff4f0c24b4503a2: Status 404 returned error can't find the container with id 0f46a3cd43cdbdf97762c12bc29d693fb203af863c676d0bbff4f0c24b4503a2
Feb 19 00:18:27 crc kubenswrapper[4825]: I0219 00:18:27.125521 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87" event={"ID":"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed","Type":"ContainerStarted","Data":"0f46a3cd43cdbdf97762c12bc29d693fb203af863c676d0bbff4f0c24b4503a2"}
Feb 19 00:18:27 crc kubenswrapper[4825]: I0219 00:18:27.128703 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv" event={"ID":"c7b48e68-4fea-48b5-b0c8-408af47180f5","Type":"ContainerStarted","Data":"4c92fd5d418cac39611fd5e590c50b2b706d6147c6f7ddb3808f76ebe85e6425"}
Feb 19 00:18:28 crc kubenswrapper[4825]: I0219 00:18:28.139564 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87" event={"ID":"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed","Type":"ContainerDied","Data":"af84f3c71d34faf3f654df3ed2780ec1a71b39df7a0ce263c436e79368a8a19a"}
Feb 19 00:18:28 crc kubenswrapper[4825]: I0219 00:18:28.139292 4825 generic.go:334] "Generic (PLEG): container finished" podID="0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" containerID="af84f3c71d34faf3f654df3ed2780ec1a71b39df7a0ce263c436e79368a8a19a" exitCode=0
Feb 19 00:18:28 crc kubenswrapper[4825]: I0219 00:18:28.143842 4825 generic.go:334] "Generic (PLEG): container finished" podID="c7b48e68-4fea-48b5-b0c8-408af47180f5" containerID="4c92fd5d418cac39611fd5e590c50b2b706d6147c6f7ddb3808f76ebe85e6425" exitCode=0
Feb 19 00:18:28 crc kubenswrapper[4825]: I0219 00:18:28.143886 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv" event={"ID":"c7b48e68-4fea-48b5-b0c8-408af47180f5","Type":"ContainerDied","Data":"4c92fd5d418cac39611fd5e590c50b2b706d6147c6f7ddb3808f76ebe85e6425"}
Feb 19 00:18:28 crc kubenswrapper[4825]: I0219 00:18:28.983660 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"]
Feb 19 00:18:28 crc kubenswrapper[4825]: I0219 00:18:28.984841 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"
Feb 19 00:18:28 crc kubenswrapper[4825]: I0219 00:18:28.994944 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"]
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.093889 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6\" (UID: \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.093978 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmht\" (UniqueName: \"kubernetes.io/projected/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-kube-api-access-nvmht\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6\" (UID: \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.094015 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6\" (UID: \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.150858 4825 generic.go:334] "Generic (PLEG): container finished" podID="c7b48e68-4fea-48b5-b0c8-408af47180f5" containerID="2eecab7e8e3c83ddbd3a06cebbf3840a8c8a0e529d139581f61894544df08aaa" exitCode=0
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.150913 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv" event={"ID":"c7b48e68-4fea-48b5-b0c8-408af47180f5","Type":"ContainerDied","Data":"2eecab7e8e3c83ddbd3a06cebbf3840a8c8a0e529d139581f61894544df08aaa"}
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.195482 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6\" (UID: \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.195651 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6\" (UID: \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.195715 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmht\" (UniqueName: \"kubernetes.io/projected/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-kube-api-access-nvmht\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6\" (UID: \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.196079 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6\" (UID: \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.196094 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6\" (UID: \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.217891 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmht\" (UniqueName: \"kubernetes.io/projected/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-kube-api-access-nvmht\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6\" (UID: \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.312580 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"
Feb 19 00:18:29 crc kubenswrapper[4825]: I0219 00:18:29.988983 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"]
Feb 19 00:18:29 crc kubenswrapper[4825]: W0219 00:18:29.996262 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2acfe766_3f1e_4dad_a8c8_cbb88c3314ec.slice/crio-b0e2da6043a72d52fa4ab4f6616813ca87165417e36e6a42a85c46bbc53e72a6 WatchSource:0}: Error finding container b0e2da6043a72d52fa4ab4f6616813ca87165417e36e6a42a85c46bbc53e72a6: Status 404 returned error can't find the container with id b0e2da6043a72d52fa4ab4f6616813ca87165417e36e6a42a85c46bbc53e72a6
Feb 19 00:18:30 crc kubenswrapper[4825]: I0219 00:18:30.163987 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6" event={"ID":"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec","Type":"ContainerStarted","Data":"b0e2da6043a72d52fa4ab4f6616813ca87165417e36e6a42a85c46bbc53e72a6"}
Feb 19 00:18:30 crc kubenswrapper[4825]: I0219 00:18:30.539226 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:30 crc kubenswrapper[4825]: I0219 00:18:30.616499 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7b48e68-4fea-48b5-b0c8-408af47180f5-util\") pod \"c7b48e68-4fea-48b5-b0c8-408af47180f5\" (UID: \"c7b48e68-4fea-48b5-b0c8-408af47180f5\") "
Feb 19 00:18:30 crc kubenswrapper[4825]: I0219 00:18:30.617209 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7b48e68-4fea-48b5-b0c8-408af47180f5-bundle\") pod \"c7b48e68-4fea-48b5-b0c8-408af47180f5\" (UID: \"c7b48e68-4fea-48b5-b0c8-408af47180f5\") "
Feb 19 00:18:30 crc kubenswrapper[4825]: I0219 00:18:30.617244 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwlhz\" (UniqueName: \"kubernetes.io/projected/c7b48e68-4fea-48b5-b0c8-408af47180f5-kube-api-access-wwlhz\") pod \"c7b48e68-4fea-48b5-b0c8-408af47180f5\" (UID: \"c7b48e68-4fea-48b5-b0c8-408af47180f5\") "
Feb 19 00:18:30 crc kubenswrapper[4825]: I0219 00:18:30.620229 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b48e68-4fea-48b5-b0c8-408af47180f5-bundle" (OuterVolumeSpecName: "bundle") pod "c7b48e68-4fea-48b5-b0c8-408af47180f5" (UID: "c7b48e68-4fea-48b5-b0c8-408af47180f5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:18:30 crc kubenswrapper[4825]: I0219 00:18:30.624259 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b48e68-4fea-48b5-b0c8-408af47180f5-kube-api-access-wwlhz" (OuterVolumeSpecName: "kube-api-access-wwlhz") pod "c7b48e68-4fea-48b5-b0c8-408af47180f5" (UID: "c7b48e68-4fea-48b5-b0c8-408af47180f5"). InnerVolumeSpecName "kube-api-access-wwlhz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:18:30 crc kubenswrapper[4825]: I0219 00:18:30.630818 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b48e68-4fea-48b5-b0c8-408af47180f5-util" (OuterVolumeSpecName: "util") pod "c7b48e68-4fea-48b5-b0c8-408af47180f5" (UID: "c7b48e68-4fea-48b5-b0c8-408af47180f5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:18:30 crc kubenswrapper[4825]: I0219 00:18:30.718406 4825 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7b48e68-4fea-48b5-b0c8-408af47180f5-util\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:30 crc kubenswrapper[4825]: I0219 00:18:30.718459 4825 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7b48e68-4fea-48b5-b0c8-408af47180f5-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:30 crc kubenswrapper[4825]: I0219 00:18:30.718477 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwlhz\" (UniqueName: \"kubernetes.io/projected/c7b48e68-4fea-48b5-b0c8-408af47180f5-kube-api-access-wwlhz\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:31 crc kubenswrapper[4825]: I0219 00:18:31.174097 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv" event={"ID":"c7b48e68-4fea-48b5-b0c8-408af47180f5","Type":"ContainerDied","Data":"82efdefe9ca787d92b3b9d0bc7de7b3c4abca52288a650e5131e52c6d4fa91dc"}
Feb 19 00:18:31 crc kubenswrapper[4825]: I0219 00:18:31.174149 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82efdefe9ca787d92b3b9d0bc7de7b3c4abca52288a650e5131e52c6d4fa91dc"
Feb 19 00:18:31 crc kubenswrapper[4825]: I0219 00:18:31.174235 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv"
Feb 19 00:18:31 crc kubenswrapper[4825]: I0219 00:18:31.176404 4825 generic.go:334] "Generic (PLEG): container finished" podID="2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" containerID="0b348f4f8071bdeaaf8fc26610fd10bd949652aadda11613e07fc806d7cd82d3" exitCode=0
Feb 19 00:18:31 crc kubenswrapper[4825]: I0219 00:18:31.176465 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6" event={"ID":"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec","Type":"ContainerDied","Data":"0b348f4f8071bdeaaf8fc26610fd10bd949652aadda11613e07fc806d7cd82d3"}
Feb 19 00:18:31 crc kubenswrapper[4825]: I0219 00:18:31.183138 4825 generic.go:334] "Generic (PLEG): container finished" podID="0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" containerID="79e099d44d94f59f82f1938a31782ce00525c83ba5c231af0804a00920141c12" exitCode=0
Feb 19 00:18:31 crc kubenswrapper[4825]: I0219 00:18:31.183193 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87" event={"ID":"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed","Type":"ContainerDied","Data":"79e099d44d94f59f82f1938a31782ce00525c83ba5c231af0804a00920141c12"}
Feb 19 00:18:32 crc kubenswrapper[4825]: I0219 00:18:32.200000 4825 generic.go:334] "Generic (PLEG): container finished" podID="0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" containerID="ea3b180ca4c0add82f1f4b7d54037524b3234e8b28425d6045309af784c652d0" exitCode=0
Feb 19 00:18:32 crc kubenswrapper[4825]: I0219 00:18:32.200262 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87" event={"ID":"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed","Type":"ContainerDied","Data":"ea3b180ca4c0add82f1f4b7d54037524b3234e8b28425d6045309af784c652d0"}
Feb 19 00:18:33 crc kubenswrapper[4825]: I0219 00:18:33.494534 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:33 crc kubenswrapper[4825]: I0219 00:18:33.658736 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkpfj\" (UniqueName: \"kubernetes.io/projected/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-kube-api-access-tkpfj\") pod \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\" (UID: \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\") "
Feb 19 00:18:33 crc kubenswrapper[4825]: I0219 00:18:33.658855 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-util\") pod \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\" (UID: \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\") "
Feb 19 00:18:33 crc kubenswrapper[4825]: I0219 00:18:33.658927 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-bundle\") pod \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\" (UID: \"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed\") "
Feb 19 00:18:33 crc kubenswrapper[4825]: I0219 00:18:33.659975 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-bundle" (OuterVolumeSpecName: "bundle") pod "0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" (UID: "0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:18:33 crc kubenswrapper[4825]: I0219 00:18:33.675670 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-util" (OuterVolumeSpecName: "util") pod "0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" (UID: "0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:18:33 crc kubenswrapper[4825]: I0219 00:18:33.681772 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-kube-api-access-tkpfj" (OuterVolumeSpecName: "kube-api-access-tkpfj") pod "0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" (UID: "0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed"). InnerVolumeSpecName "kube-api-access-tkpfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:18:33 crc kubenswrapper[4825]: I0219 00:18:33.759993 4825 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:33 crc kubenswrapper[4825]: I0219 00:18:33.760435 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkpfj\" (UniqueName: \"kubernetes.io/projected/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-kube-api-access-tkpfj\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:33 crc kubenswrapper[4825]: I0219 00:18:33.760449 4825 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed-util\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:34 crc kubenswrapper[4825]: I0219 00:18:34.220530 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87" event={"ID":"0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed","Type":"ContainerDied","Data":"0f46a3cd43cdbdf97762c12bc29d693fb203af863c676d0bbff4f0c24b4503a2"}
Feb 19 00:18:34 crc kubenswrapper[4825]: I0219 00:18:34.220579 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f46a3cd43cdbdf97762c12bc29d693fb203af863c676d0bbff4f0c24b4503a2"
Feb 19 00:18:34 crc kubenswrapper[4825]: I0219 00:18:34.220627 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87"
Feb 19 00:18:37 crc kubenswrapper[4825]: I0219 00:18:37.255154 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6" event={"ID":"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec","Type":"ContainerStarted","Data":"73f034e53538fa8a00082dd5a01a613b393ee84b334fcc8966397abeb503ce48"}
Feb 19 00:18:38 crc kubenswrapper[4825]: I0219 00:18:38.262629 4825 generic.go:334] "Generic (PLEG): container finished" podID="2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" containerID="73f034e53538fa8a00082dd5a01a613b393ee84b334fcc8966397abeb503ce48" exitCode=0
Feb 19 00:18:38 crc kubenswrapper[4825]: I0219 00:18:38.262700 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6" event={"ID":"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec","Type":"ContainerDied","Data":"73f034e53538fa8a00082dd5a01a613b393ee84b334fcc8966397abeb503ce48"}
Feb 19 00:18:39 crc kubenswrapper[4825]: I0219 00:18:39.271029 4825 generic.go:334] "Generic (PLEG): container finished" podID="2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" containerID="43254a932ce749fc69ad9e0f07adba406438dd101d14c8b427055f815f658234" exitCode=0
Feb 19 00:18:39 crc kubenswrapper[4825]: I0219 00:18:39.271099 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6" event={"ID":"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec","Type":"ContainerDied","Data":"43254a932ce749fc69ad9e0f07adba406438dd101d14c8b427055f815f658234"}
Feb 19 00:18:40 crc kubenswrapper[4825]: I0219 00:18:40.598727 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6"
Feb 19 00:18:40 crc kubenswrapper[4825]: I0219 00:18:40.687918 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-util\") pod \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\" (UID: \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\") "
Feb 19 00:18:40 crc kubenswrapper[4825]: I0219 00:18:40.688017 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvmht\" (UniqueName: \"kubernetes.io/projected/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-kube-api-access-nvmht\") pod \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\" (UID: \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\") "
Feb 19 00:18:40 crc kubenswrapper[4825]: I0219 00:18:40.688089 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-bundle\") pod \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\" (UID: \"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec\") "
Feb 19 00:18:40 crc kubenswrapper[4825]: I0219 00:18:40.689333 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-bundle" (OuterVolumeSpecName: "bundle") pod "2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" (UID: "2acfe766-3f1e-4dad-a8c8-cbb88c3314ec"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:18:40 crc kubenswrapper[4825]: I0219 00:18:40.697740 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-kube-api-access-nvmht" (OuterVolumeSpecName: "kube-api-access-nvmht") pod "2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" (UID: "2acfe766-3f1e-4dad-a8c8-cbb88c3314ec"). InnerVolumeSpecName "kube-api-access-nvmht". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:18:40 crc kubenswrapper[4825]: I0219 00:18:40.714281 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-util" (OuterVolumeSpecName: "util") pod "2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" (UID: "2acfe766-3f1e-4dad-a8c8-cbb88c3314ec"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:18:40 crc kubenswrapper[4825]: I0219 00:18:40.789635 4825 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-util\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:40 crc kubenswrapper[4825]: I0219 00:18:40.789682 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvmht\" (UniqueName: \"kubernetes.io/projected/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-kube-api-access-nvmht\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:40 crc kubenswrapper[4825]: I0219 00:18:40.789696 4825 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2acfe766-3f1e-4dad-a8c8-cbb88c3314ec-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048239 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-l52fp"]
Feb 19 00:18:41 crc kubenswrapper[4825]: E0219 00:18:41.048500 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b48e68-4fea-48b5-b0c8-408af47180f5" containerName="pull"
Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048537 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b48e68-4fea-48b5-b0c8-408af47180f5" containerName="pull"
Feb 19 00:18:41 crc kubenswrapper[4825]: E0219 00:18:41.048547 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" containerName="util"
Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048553 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" containerName="util"
Feb 19 00:18:41 crc kubenswrapper[4825]: E0219 00:18:41.048565 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b48e68-4fea-48b5-b0c8-408af47180f5" containerName="extract"
Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048572 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b48e68-4fea-48b5-b0c8-408af47180f5" containerName="extract"
Feb 19 00:18:41 crc kubenswrapper[4825]: E0219 00:18:41.048582 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" containerName="pull"
Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048588 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" containerName="pull"
Feb 19 00:18:41 crc kubenswrapper[4825]: E0219 00:18:41.048596 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" containerName="pull"
Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048602 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" containerName="pull"
Feb 19 00:18:41 crc kubenswrapper[4825]: E0219 00:18:41.048614 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b48e68-4fea-48b5-b0c8-408af47180f5" containerName="util"
Feb 19
00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048619 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b48e68-4fea-48b5-b0c8-408af47180f5" containerName="util" Feb 19 00:18:41 crc kubenswrapper[4825]: E0219 00:18:41.048626 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" containerName="extract" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048631 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" containerName="extract" Feb 19 00:18:41 crc kubenswrapper[4825]: E0219 00:18:41.048640 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" containerName="extract" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048646 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" containerName="extract" Feb 19 00:18:41 crc kubenswrapper[4825]: E0219 00:18:41.048656 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" containerName="util" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048662 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" containerName="util" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048744 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="2acfe766-3f1e-4dad-a8c8-cbb88c3314ec" containerName="extract" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048754 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b48e68-4fea-48b5-b0c8-408af47180f5" containerName="extract" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.048770 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed" containerName="extract" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.049293 4825 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l52fp" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.051091 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.052061 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-jwnbs" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.064060 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.083880 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-l52fp"] Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.093194 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnhmq\" (UniqueName: \"kubernetes.io/projected/d8b8a9db-14c5-4cbe-8673-3ac90c2b6749-kube-api-access-hnhmq\") pod \"obo-prometheus-operator-68bc856cb9-l52fp\" (UID: \"d8b8a9db-14c5-4cbe-8673-3ac90c2b6749\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l52fp" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.194954 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnhmq\" (UniqueName: \"kubernetes.io/projected/d8b8a9db-14c5-4cbe-8673-3ac90c2b6749-kube-api-access-hnhmq\") pod \"obo-prometheus-operator-68bc856cb9-l52fp\" (UID: \"d8b8a9db-14c5-4cbe-8673-3ac90c2b6749\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l52fp" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.208946 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r"] Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.209710 4825 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.231293 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.231809 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnhmq\" (UniqueName: \"kubernetes.io/projected/d8b8a9db-14c5-4cbe-8673-3ac90c2b6749-kube-api-access-hnhmq\") pod \"obo-prometheus-operator-68bc856cb9-l52fp\" (UID: \"d8b8a9db-14c5-4cbe-8673-3ac90c2b6749\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l52fp" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.260264 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-5n2h9" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.264675 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5"] Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.265432 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.279282 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r"] Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.291499 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6" event={"ID":"2acfe766-3f1e-4dad-a8c8-cbb88c3314ec","Type":"ContainerDied","Data":"b0e2da6043a72d52fa4ab4f6616813ca87165417e36e6a42a85c46bbc53e72a6"} Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.291562 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0e2da6043a72d52fa4ab4f6616813ca87165417e36e6a42a85c46bbc53e72a6" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.291637 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.295937 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6608c2fd-be9c-4cb3-93a6-1dcdf8da8555-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57879d559c-kkc6r\" (UID: \"6608c2fd-be9c-4cb3-93a6-1dcdf8da8555\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.295976 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6608c2fd-be9c-4cb3-93a6-1dcdf8da8555-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57879d559c-kkc6r\" (UID: \"6608c2fd-be9c-4cb3-93a6-1dcdf8da8555\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.295997 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f078aef-685e-4fd4-ab75-d23f8f1cc185-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57879d559c-488k5\" (UID: \"5f078aef-685e-4fd4-ab75-d23f8f1cc185\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.296035 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f078aef-685e-4fd4-ab75-d23f8f1cc185-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57879d559c-488k5\" (UID: \"5f078aef-685e-4fd4-ab75-d23f8f1cc185\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.306984 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5"] Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.384817 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l52fp" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.398229 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f078aef-685e-4fd4-ab75-d23f8f1cc185-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57879d559c-488k5\" (UID: \"5f078aef-685e-4fd4-ab75-d23f8f1cc185\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.398325 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6608c2fd-be9c-4cb3-93a6-1dcdf8da8555-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57879d559c-kkc6r\" (UID: \"6608c2fd-be9c-4cb3-93a6-1dcdf8da8555\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.398351 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6608c2fd-be9c-4cb3-93a6-1dcdf8da8555-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57879d559c-kkc6r\" (UID: \"6608c2fd-be9c-4cb3-93a6-1dcdf8da8555\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.398374 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f078aef-685e-4fd4-ab75-d23f8f1cc185-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57879d559c-488k5\" (UID: \"5f078aef-685e-4fd4-ab75-d23f8f1cc185\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.408305 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6608c2fd-be9c-4cb3-93a6-1dcdf8da8555-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57879d559c-kkc6r\" (UID: \"6608c2fd-be9c-4cb3-93a6-1dcdf8da8555\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.415102 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5f078aef-685e-4fd4-ab75-d23f8f1cc185-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57879d559c-488k5\" (UID: \"5f078aef-685e-4fd4-ab75-d23f8f1cc185\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.415168 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5f078aef-685e-4fd4-ab75-d23f8f1cc185-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57879d559c-488k5\" (UID: \"5f078aef-685e-4fd4-ab75-d23f8f1cc185\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.432290 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6608c2fd-be9c-4cb3-93a6-1dcdf8da8555-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57879d559c-kkc6r\" (UID: 
\"6608c2fd-be9c-4cb3-93a6-1dcdf8da8555\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.465926 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-g8xlc"] Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.467076 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.485222 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-ftsbr" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.485484 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.499181 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4481c24-6ff2-4e86-8889-8910cb81f08b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-g8xlc\" (UID: \"e4481c24-6ff2-4e86-8889-8910cb81f08b\") " pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.499229 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r5kb\" (UniqueName: \"kubernetes.io/projected/e4481c24-6ff2-4e86-8889-8910cb81f08b-kube-api-access-7r5kb\") pod \"observability-operator-59bdc8b94-g8xlc\" (UID: \"e4481c24-6ff2-4e86-8889-8910cb81f08b\") " pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.522586 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-g8xlc"] Feb 19 
00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.523931 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.584919 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.601286 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4481c24-6ff2-4e86-8889-8910cb81f08b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-g8xlc\" (UID: \"e4481c24-6ff2-4e86-8889-8910cb81f08b\") " pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.601349 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r5kb\" (UniqueName: \"kubernetes.io/projected/e4481c24-6ff2-4e86-8889-8910cb81f08b-kube-api-access-7r5kb\") pod \"observability-operator-59bdc8b94-g8xlc\" (UID: \"e4481c24-6ff2-4e86-8889-8910cb81f08b\") " pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.610281 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4481c24-6ff2-4e86-8889-8910cb81f08b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-g8xlc\" (UID: \"e4481c24-6ff2-4e86-8889-8910cb81f08b\") " pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.637570 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r5kb\" (UniqueName: 
\"kubernetes.io/projected/e4481c24-6ff2-4e86-8889-8910cb81f08b-kube-api-access-7r5kb\") pod \"observability-operator-59bdc8b94-g8xlc\" (UID: \"e4481c24-6ff2-4e86-8889-8910cb81f08b\") " pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.709497 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-v5f9p"] Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.711903 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.715318 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-hhbxl" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.746790 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-v5f9p"] Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.803001 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.812130 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/af582d80-5f92-4bf1-9d5e-44ade090c8f9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-v5f9p\" (UID: \"af582d80-5f92-4bf1-9d5e-44ade090c8f9\") " pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.812203 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phpqn\" (UniqueName: \"kubernetes.io/projected/af582d80-5f92-4bf1-9d5e-44ade090c8f9-kube-api-access-phpqn\") pod \"perses-operator-5bf474d74f-v5f9p\" (UID: \"af582d80-5f92-4bf1-9d5e-44ade090c8f9\") " pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.912901 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/af582d80-5f92-4bf1-9d5e-44ade090c8f9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-v5f9p\" (UID: \"af582d80-5f92-4bf1-9d5e-44ade090c8f9\") " pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.913412 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phpqn\" (UniqueName: \"kubernetes.io/projected/af582d80-5f92-4bf1-9d5e-44ade090c8f9-kube-api-access-phpqn\") pod \"perses-operator-5bf474d74f-v5f9p\" (UID: \"af582d80-5f92-4bf1-9d5e-44ade090c8f9\") " pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.911942 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-l52fp"] Feb 
19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.923471 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/af582d80-5f92-4bf1-9d5e-44ade090c8f9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-v5f9p\" (UID: \"af582d80-5f92-4bf1-9d5e-44ade090c8f9\") " pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" Feb 19 00:18:41 crc kubenswrapper[4825]: I0219 00:18:41.947871 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phpqn\" (UniqueName: \"kubernetes.io/projected/af582d80-5f92-4bf1-9d5e-44ade090c8f9-kube-api-access-phpqn\") pod \"perses-operator-5bf474d74f-v5f9p\" (UID: \"af582d80-5f92-4bf1-9d5e-44ade090c8f9\") " pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" Feb 19 00:18:42 crc kubenswrapper[4825]: I0219 00:18:42.070849 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" Feb 19 00:18:42 crc kubenswrapper[4825]: I0219 00:18:42.173462 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r"] Feb 19 00:18:42 crc kubenswrapper[4825]: I0219 00:18:42.299145 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r" event={"ID":"6608c2fd-be9c-4cb3-93a6-1dcdf8da8555","Type":"ContainerStarted","Data":"1400794a1ae8930f848f3b220e26bf0e75968c20d7374eb30112b9147f059f0b"} Feb 19 00:18:42 crc kubenswrapper[4825]: I0219 00:18:42.299992 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l52fp" event={"ID":"d8b8a9db-14c5-4cbe-8673-3ac90c2b6749","Type":"ContainerStarted","Data":"051721b811c908c5068dff7d6c813a8f866520e59112f7e9d968c85772d7ca1c"} Feb 19 00:18:42 crc kubenswrapper[4825]: I0219 00:18:42.326901 4825 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-g8xlc"] Feb 19 00:18:42 crc kubenswrapper[4825]: I0219 00:18:42.345828 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5"] Feb 19 00:18:42 crc kubenswrapper[4825]: I0219 00:18:42.500917 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-v5f9p"] Feb 19 00:18:43 crc kubenswrapper[4825]: I0219 00:18:43.310414 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" event={"ID":"af582d80-5f92-4bf1-9d5e-44ade090c8f9","Type":"ContainerStarted","Data":"30afe01ed42e20f973d7dec19ea7ff77f590e0e5fc9d896456d1f888dcb3f8ff"} Feb 19 00:18:43 crc kubenswrapper[4825]: I0219 00:18:43.311946 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5" event={"ID":"5f078aef-685e-4fd4-ab75-d23f8f1cc185","Type":"ContainerStarted","Data":"7c0f1ca1b551d76640ae93bcae5fac05152ec0d96e486e77dcc7f544156e639a"} Feb 19 00:18:43 crc kubenswrapper[4825]: I0219 00:18:43.312731 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" event={"ID":"e4481c24-6ff2-4e86-8889-8910cb81f08b","Type":"ContainerStarted","Data":"0f4cf8c24bc9e188c440545c02c602cc8d9b60607b113067554d370bf7ee959b"} Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.672339 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-84bd846658-vf92k"] Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.673568 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-84bd846658-vf92k" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.683736 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.683983 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.685733 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.686220 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-6sj6j" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.696727 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0f129d2-684b-4418-82e3-78463b8aadd4-webhook-cert\") pod \"elastic-operator-84bd846658-vf92k\" (UID: \"d0f129d2-684b-4418-82e3-78463b8aadd4\") " pod="service-telemetry/elastic-operator-84bd846658-vf92k" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.696786 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfnts\" (UniqueName: \"kubernetes.io/projected/d0f129d2-684b-4418-82e3-78463b8aadd4-kube-api-access-nfnts\") pod \"elastic-operator-84bd846658-vf92k\" (UID: \"d0f129d2-684b-4418-82e3-78463b8aadd4\") " pod="service-telemetry/elastic-operator-84bd846658-vf92k" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.697049 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0f129d2-684b-4418-82e3-78463b8aadd4-apiservice-cert\") pod \"elastic-operator-84bd846658-vf92k\" (UID: 
\"d0f129d2-684b-4418-82e3-78463b8aadd4\") " pod="service-telemetry/elastic-operator-84bd846658-vf92k" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.699150 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-84bd846658-vf92k"] Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.801361 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0f129d2-684b-4418-82e3-78463b8aadd4-webhook-cert\") pod \"elastic-operator-84bd846658-vf92k\" (UID: \"d0f129d2-684b-4418-82e3-78463b8aadd4\") " pod="service-telemetry/elastic-operator-84bd846658-vf92k" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.801428 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfnts\" (UniqueName: \"kubernetes.io/projected/d0f129d2-684b-4418-82e3-78463b8aadd4-kube-api-access-nfnts\") pod \"elastic-operator-84bd846658-vf92k\" (UID: \"d0f129d2-684b-4418-82e3-78463b8aadd4\") " pod="service-telemetry/elastic-operator-84bd846658-vf92k" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.801481 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0f129d2-684b-4418-82e3-78463b8aadd4-apiservice-cert\") pod \"elastic-operator-84bd846658-vf92k\" (UID: \"d0f129d2-684b-4418-82e3-78463b8aadd4\") " pod="service-telemetry/elastic-operator-84bd846658-vf92k" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.812088 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0f129d2-684b-4418-82e3-78463b8aadd4-webhook-cert\") pod \"elastic-operator-84bd846658-vf92k\" (UID: \"d0f129d2-684b-4418-82e3-78463b8aadd4\") " pod="service-telemetry/elastic-operator-84bd846658-vf92k" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.818719 4825 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0f129d2-684b-4418-82e3-78463b8aadd4-apiservice-cert\") pod \"elastic-operator-84bd846658-vf92k\" (UID: \"d0f129d2-684b-4418-82e3-78463b8aadd4\") " pod="service-telemetry/elastic-operator-84bd846658-vf92k" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.838567 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfnts\" (UniqueName: \"kubernetes.io/projected/d0f129d2-684b-4418-82e3-78463b8aadd4-kube-api-access-nfnts\") pod \"elastic-operator-84bd846658-vf92k\" (UID: \"d0f129d2-684b-4418-82e3-78463b8aadd4\") " pod="service-telemetry/elastic-operator-84bd846658-vf92k" Feb 19 00:18:45 crc kubenswrapper[4825]: I0219 00:18:45.994916 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-84bd846658-vf92k" Feb 19 00:18:53 crc kubenswrapper[4825]: I0219 00:18:53.776158 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-84bd846658-vf92k"] Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.399986 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-84bd846658-vf92k" event={"ID":"d0f129d2-684b-4418-82e3-78463b8aadd4","Type":"ContainerStarted","Data":"a65d5de6ce991d5e7fe80c38bf1668d2fad40207d8bf2cf7a2aa80aa1e70d070"} Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.403000 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5" event={"ID":"5f078aef-685e-4fd4-ab75-d23f8f1cc185","Type":"ContainerStarted","Data":"a1c0900edfbbb76e0a5ff6501a24717c920435456aa16f29d84eca6ade26cbca"} Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.407264 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l52fp" 
event={"ID":"d8b8a9db-14c5-4cbe-8673-3ac90c2b6749","Type":"ContainerStarted","Data":"84c81464d99d800c82798d26a7d2264e19f839e9bdfe2309b2bf571a4256e4a3"} Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.410075 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" event={"ID":"e4481c24-6ff2-4e86-8889-8910cb81f08b","Type":"ContainerStarted","Data":"21c585a3ce4b36677a5dd721e57ca3e25c9e9d023d65f55c1fcd187750610be0"} Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.410289 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.412906 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r" event={"ID":"6608c2fd-be9c-4cb3-93a6-1dcdf8da8555","Type":"ContainerStarted","Data":"35dee0a38f033b7549cea3da94b5586c1ef775d22a73da959ad7b85ac47befc3"} Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.415875 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" event={"ID":"af582d80-5f92-4bf1-9d5e-44ade090c8f9","Type":"ContainerStarted","Data":"4035cbd21ec7575c2c722f80d6e783d0ffb0d20c261be81303fe4e9691179ce1"} Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.416613 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.431737 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-488k5" podStartSLOduration=2.362761537 podStartE2EDuration="13.4317096s" podCreationTimestamp="2026-02-19 00:18:41 +0000 UTC" firstStartedPulling="2026-02-19 00:18:42.375819121 +0000 UTC m=+668.066785168" 
lastFinishedPulling="2026-02-19 00:18:53.444767184 +0000 UTC m=+679.135733231" observedRunningTime="2026-02-19 00:18:54.428999088 +0000 UTC m=+680.119965145" watchObservedRunningTime="2026-02-19 00:18:54.4317096 +0000 UTC m=+680.122675647" Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.461779 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57879d559c-kkc6r" podStartSLOduration=2.226274899 podStartE2EDuration="13.461744591s" podCreationTimestamp="2026-02-19 00:18:41 +0000 UTC" firstStartedPulling="2026-02-19 00:18:42.21484962 +0000 UTC m=+667.905815677" lastFinishedPulling="2026-02-19 00:18:53.450319322 +0000 UTC m=+679.141285369" observedRunningTime="2026-02-19 00:18:54.453927633 +0000 UTC m=+680.144893700" watchObservedRunningTime="2026-02-19 00:18:54.461744591 +0000 UTC m=+680.152710638" Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.487966 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" podStartSLOduration=2.580659198 podStartE2EDuration="13.487935649s" podCreationTimestamp="2026-02-19 00:18:41 +0000 UTC" firstStartedPulling="2026-02-19 00:18:42.537507193 +0000 UTC m=+668.228473230" lastFinishedPulling="2026-02-19 00:18:53.444783634 +0000 UTC m=+679.135749681" observedRunningTime="2026-02-19 00:18:54.478716414 +0000 UTC m=+680.169682461" watchObservedRunningTime="2026-02-19 00:18:54.487935649 +0000 UTC m=+680.178901686" Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.489372 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.510059 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-g8xlc" podStartSLOduration=2.351276493 
podStartE2EDuration="13.510017718s" podCreationTimestamp="2026-02-19 00:18:41 +0000 UTC" firstStartedPulling="2026-02-19 00:18:42.352297395 +0000 UTC m=+668.043263442" lastFinishedPulling="2026-02-19 00:18:53.51103862 +0000 UTC m=+679.202004667" observedRunningTime="2026-02-19 00:18:54.507035169 +0000 UTC m=+680.198001226" watchObservedRunningTime="2026-02-19 00:18:54.510017718 +0000 UTC m=+680.200983765" Feb 19 00:18:54 crc kubenswrapper[4825]: I0219 00:18:54.532362 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-l52fp" podStartSLOduration=2.129229502 podStartE2EDuration="13.531963744s" podCreationTimestamp="2026-02-19 00:18:41 +0000 UTC" firstStartedPulling="2026-02-19 00:18:41.954685782 +0000 UTC m=+667.645651829" lastFinishedPulling="2026-02-19 00:18:53.357420024 +0000 UTC m=+679.048386071" observedRunningTime="2026-02-19 00:18:54.527190366 +0000 UTC m=+680.218156413" watchObservedRunningTime="2026-02-19 00:18:54.531963744 +0000 UTC m=+680.222929791" Feb 19 00:18:57 crc kubenswrapper[4825]: I0219 00:18:57.435946 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-84bd846658-vf92k" event={"ID":"d0f129d2-684b-4418-82e3-78463b8aadd4","Type":"ContainerStarted","Data":"b00de7542f5589ccf0404702b7d42b32142b519a9ff27a43a8272690cb233bfe"} Feb 19 00:18:57 crc kubenswrapper[4825]: I0219 00:18:57.491077 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-84bd846658-vf92k" podStartSLOduration=9.381111911 podStartE2EDuration="12.491059337s" podCreationTimestamp="2026-02-19 00:18:45 +0000 UTC" firstStartedPulling="2026-02-19 00:18:53.793066901 +0000 UTC m=+679.484032948" lastFinishedPulling="2026-02-19 00:18:56.903014327 +0000 UTC m=+682.593980374" observedRunningTime="2026-02-19 00:18:57.474947227 +0000 UTC m=+683.165913284" watchObservedRunningTime="2026-02-19 00:18:57.491059337 +0000 UTC 
m=+683.182025384" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.690379 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.691649 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.698963 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.704195 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.704463 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.704491 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.704488 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-4jsvr" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.704616 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.704693 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.705230 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.706859 4825 reflector.go:368] Caches populated for *v1.Secret from 
object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.707484 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.707546 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.707594 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.707616 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.707757 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.707865 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.707901 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.707931 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/701b0694-d308-4569-823a-5848cfa5fea4-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.708033 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: 
\"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.708077 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.708206 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.708254 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.708282 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.708322 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.708403 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.731497 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.809574 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.809672 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.809700 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: 
\"kubernetes.io/downward-api/701b0694-d308-4569-823a-5848cfa5fea4-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.810946 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.810982 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.811025 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.811053 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 
00:18:59.811078 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.811104 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.811137 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.811175 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.811196 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " 
pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.811229 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.811248 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.811271 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.825927 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.826709 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: 
\"kubernetes.io/downward-api/701b0694-d308-4569-823a-5848cfa5fea4-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.827164 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.831262 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.831791 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.831776 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 
00:18:59.832070 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.835658 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.836406 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.836864 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.837046 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 
00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.838011 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.854058 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/701b0694-d308-4569-823a-5848cfa5fea4-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.860613 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:59 crc kubenswrapper[4825]: I0219 00:18:59.860947 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/701b0694-d308-4569-823a-5848cfa5fea4-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"701b0694-d308-4569-823a-5848cfa5fea4\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.012600 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.320742 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 19 00:19:00 crc kubenswrapper[4825]: W0219 00:19:00.325454 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod701b0694_d308_4569_823a_5848cfa5fea4.slice/crio-83d14c027006846e2171129b704b43cc361a2e330c840ee63705d921e2a9f458 WatchSource:0}: Error finding container 83d14c027006846e2171129b704b43cc361a2e330c840ee63705d921e2a9f458: Status 404 returned error can't find the container with id 83d14c027006846e2171129b704b43cc361a2e330c840ee63705d921e2a9f458 Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.685283 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k"] Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.686221 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k" Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.689436 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.689694 4825 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-t29kh" Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.689883 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.701067 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k"] Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.736748 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8784e930-51cc-41ac-8830-7b39280b0a87-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-hnz9k\" (UID: \"8784e930-51cc-41ac-8830-7b39280b0a87\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k" Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.736869 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzlz6\" (UniqueName: \"kubernetes.io/projected/8784e930-51cc-41ac-8830-7b39280b0a87-kube-api-access-mzlz6\") pod \"cert-manager-operator-controller-manager-5586865c96-hnz9k\" (UID: \"8784e930-51cc-41ac-8830-7b39280b0a87\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k" Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.838110 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/8784e930-51cc-41ac-8830-7b39280b0a87-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-hnz9k\" (UID: \"8784e930-51cc-41ac-8830-7b39280b0a87\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k" Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.838246 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzlz6\" (UniqueName: \"kubernetes.io/projected/8784e930-51cc-41ac-8830-7b39280b0a87-kube-api-access-mzlz6\") pod \"cert-manager-operator-controller-manager-5586865c96-hnz9k\" (UID: \"8784e930-51cc-41ac-8830-7b39280b0a87\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k" Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.839177 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8784e930-51cc-41ac-8830-7b39280b0a87-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-hnz9k\" (UID: \"8784e930-51cc-41ac-8830-7b39280b0a87\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k" Feb 19 00:19:00 crc kubenswrapper[4825]: I0219 00:19:00.887657 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzlz6\" (UniqueName: \"kubernetes.io/projected/8784e930-51cc-41ac-8830-7b39280b0a87-kube-api-access-mzlz6\") pod \"cert-manager-operator-controller-manager-5586865c96-hnz9k\" (UID: \"8784e930-51cc-41ac-8830-7b39280b0a87\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k" Feb 19 00:19:01 crc kubenswrapper[4825]: I0219 00:19:01.003082 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k" Feb 19 00:19:01 crc kubenswrapper[4825]: I0219 00:19:01.145084 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"701b0694-d308-4569-823a-5848cfa5fea4","Type":"ContainerStarted","Data":"83d14c027006846e2171129b704b43cc361a2e330c840ee63705d921e2a9f458"} Feb 19 00:19:01 crc kubenswrapper[4825]: I0219 00:19:01.363712 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k"] Feb 19 00:19:01 crc kubenswrapper[4825]: W0219 00:19:01.395404 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8784e930_51cc_41ac_8830_7b39280b0a87.slice/crio-7ee9e4971ab004fb2d263d0fdf2b3fc6dbe8fcc0e6e1cc6f565dea9b413c85e9 WatchSource:0}: Error finding container 7ee9e4971ab004fb2d263d0fdf2b3fc6dbe8fcc0e6e1cc6f565dea9b413c85e9: Status 404 returned error can't find the container with id 7ee9e4971ab004fb2d263d0fdf2b3fc6dbe8fcc0e6e1cc6f565dea9b413c85e9 Feb 19 00:19:02 crc kubenswrapper[4825]: I0219 00:19:02.075629 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-v5f9p" Feb 19 00:19:02 crc kubenswrapper[4825]: I0219 00:19:02.164623 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k" event={"ID":"8784e930-51cc-41ac-8830-7b39280b0a87","Type":"ContainerStarted","Data":"7ee9e4971ab004fb2d263d0fdf2b3fc6dbe8fcc0e6e1cc6f565dea9b413c85e9"} Feb 19 00:19:10 crc kubenswrapper[4825]: I0219 00:19:10.230271 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k" 
event={"ID":"8784e930-51cc-41ac-8830-7b39280b0a87","Type":"ContainerStarted","Data":"868f632b666dc0250bb3e7b4f1bbdf984634973c6125f7621aeaf585735de55b"} Feb 19 00:19:10 crc kubenswrapper[4825]: I0219 00:19:10.253260 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-hnz9k" podStartSLOduration=2.048656994 podStartE2EDuration="10.253236577s" podCreationTimestamp="2026-02-19 00:19:00 +0000 UTC" firstStartedPulling="2026-02-19 00:19:01.402086403 +0000 UTC m=+687.093052450" lastFinishedPulling="2026-02-19 00:19:09.606665986 +0000 UTC m=+695.297632033" observedRunningTime="2026-02-19 00:19:10.248931752 +0000 UTC m=+695.939897799" watchObservedRunningTime="2026-02-19 00:19:10.253236577 +0000 UTC m=+695.944202624" Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.155930 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-dnfkd"] Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.159471 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-dnfkd"] Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.159700 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.179852 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.180757 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.180769 4825 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-t62gg" Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.264005 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d67e5607-ad75-4cca-9aa6-db360e6334b1-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-dnfkd\" (UID: \"d67e5607-ad75-4cca-9aa6-db360e6334b1\") " pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.264060 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7mhb\" (UniqueName: \"kubernetes.io/projected/d67e5607-ad75-4cca-9aa6-db360e6334b1-kube-api-access-z7mhb\") pod \"cert-manager-webhook-6888856db4-dnfkd\" (UID: \"d67e5607-ad75-4cca-9aa6-db360e6334b1\") " pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.365423 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d67e5607-ad75-4cca-9aa6-db360e6334b1-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-dnfkd\" (UID: \"d67e5607-ad75-4cca-9aa6-db360e6334b1\") " pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.365961 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z7mhb\" (UniqueName: \"kubernetes.io/projected/d67e5607-ad75-4cca-9aa6-db360e6334b1-kube-api-access-z7mhb\") pod \"cert-manager-webhook-6888856db4-dnfkd\" (UID: \"d67e5607-ad75-4cca-9aa6-db360e6334b1\") " pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.388533 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d67e5607-ad75-4cca-9aa6-db360e6334b1-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-dnfkd\" (UID: \"d67e5607-ad75-4cca-9aa6-db360e6334b1\") " pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.396570 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7mhb\" (UniqueName: \"kubernetes.io/projected/d67e5607-ad75-4cca-9aa6-db360e6334b1-kube-api-access-z7mhb\") pod \"cert-manager-webhook-6888856db4-dnfkd\" (UID: \"d67e5607-ad75-4cca-9aa6-db360e6334b1\") " pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" Feb 19 00:19:14 crc kubenswrapper[4825]: I0219 00:19:14.508543 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" Feb 19 00:19:16 crc kubenswrapper[4825]: I0219 00:19:16.120658 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-x2dr2"] Feb 19 00:19:16 crc kubenswrapper[4825]: I0219 00:19:16.122102 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-x2dr2" Feb 19 00:19:16 crc kubenswrapper[4825]: I0219 00:19:16.123453 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-x2dr2"] Feb 19 00:19:16 crc kubenswrapper[4825]: I0219 00:19:16.126090 4825 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-66nv9" Feb 19 00:19:16 crc kubenswrapper[4825]: I0219 00:19:16.197861 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrflz\" (UniqueName: \"kubernetes.io/projected/c6835fb4-5c79-47fa-8a6f-a827d96e1363-kube-api-access-wrflz\") pod \"cert-manager-cainjector-5545bd876-x2dr2\" (UID: \"c6835fb4-5c79-47fa-8a6f-a827d96e1363\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x2dr2" Feb 19 00:19:16 crc kubenswrapper[4825]: I0219 00:19:16.198240 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6835fb4-5c79-47fa-8a6f-a827d96e1363-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-x2dr2\" (UID: \"c6835fb4-5c79-47fa-8a6f-a827d96e1363\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x2dr2" Feb 19 00:19:16 crc kubenswrapper[4825]: I0219 00:19:16.300625 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6835fb4-5c79-47fa-8a6f-a827d96e1363-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-x2dr2\" (UID: \"c6835fb4-5c79-47fa-8a6f-a827d96e1363\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x2dr2" Feb 19 00:19:16 crc kubenswrapper[4825]: I0219 00:19:16.300718 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrflz\" (UniqueName: 
\"kubernetes.io/projected/c6835fb4-5c79-47fa-8a6f-a827d96e1363-kube-api-access-wrflz\") pod \"cert-manager-cainjector-5545bd876-x2dr2\" (UID: \"c6835fb4-5c79-47fa-8a6f-a827d96e1363\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x2dr2" Feb 19 00:19:16 crc kubenswrapper[4825]: I0219 00:19:16.324775 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrflz\" (UniqueName: \"kubernetes.io/projected/c6835fb4-5c79-47fa-8a6f-a827d96e1363-kube-api-access-wrflz\") pod \"cert-manager-cainjector-5545bd876-x2dr2\" (UID: \"c6835fb4-5c79-47fa-8a6f-a827d96e1363\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x2dr2" Feb 19 00:19:16 crc kubenswrapper[4825]: I0219 00:19:16.327177 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c6835fb4-5c79-47fa-8a6f-a827d96e1363-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-x2dr2\" (UID: \"c6835fb4-5c79-47fa-8a6f-a827d96e1363\") " pod="cert-manager/cert-manager-cainjector-5545bd876-x2dr2" Feb 19 00:19:16 crc kubenswrapper[4825]: I0219 00:19:16.449623 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-x2dr2" Feb 19 00:19:17 crc kubenswrapper[4825]: I0219 00:19:17.656491 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-x2dr2"] Feb 19 00:19:18 crc kubenswrapper[4825]: I0219 00:19:18.179358 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-dnfkd"] Feb 19 00:19:18 crc kubenswrapper[4825]: W0219 00:19:18.200170 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd67e5607_ad75_4cca_9aa6_db360e6334b1.slice/crio-13b22c13c82f168857fff8f49b2b78b9b2a2f94f595469c3be0343adf64bf013 WatchSource:0}: Error finding container 13b22c13c82f168857fff8f49b2b78b9b2a2f94f595469c3be0343adf64bf013: Status 404 returned error can't find the container with id 13b22c13c82f168857fff8f49b2b78b9b2a2f94f595469c3be0343adf64bf013 Feb 19 00:19:18 crc kubenswrapper[4825]: I0219 00:19:18.317558 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"701b0694-d308-4569-823a-5848cfa5fea4","Type":"ContainerStarted","Data":"e58d1e6c116c859f42073734ae6fec0d9d51866c387aa2b04fafbc8a0d801843"} Feb 19 00:19:18 crc kubenswrapper[4825]: I0219 00:19:18.319184 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" event={"ID":"d67e5607-ad75-4cca-9aa6-db360e6334b1","Type":"ContainerStarted","Data":"13b22c13c82f168857fff8f49b2b78b9b2a2f94f595469c3be0343adf64bf013"} Feb 19 00:19:18 crc kubenswrapper[4825]: I0219 00:19:18.320123 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-x2dr2" event={"ID":"c6835fb4-5c79-47fa-8a6f-a827d96e1363","Type":"ContainerStarted","Data":"cff008a78fca3ee8a96cb12e5d81ee111c4b69597439a0f39ffda5b6661d35ee"} Feb 19 00:19:18 crc kubenswrapper[4825]: 
I0219 00:19:18.512048 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 19 00:19:18 crc kubenswrapper[4825]: I0219 00:19:18.547441 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 19 00:19:20 crc kubenswrapper[4825]: I0219 00:19:20.333571 4825 generic.go:334] "Generic (PLEG): container finished" podID="701b0694-d308-4569-823a-5848cfa5fea4" containerID="e58d1e6c116c859f42073734ae6fec0d9d51866c387aa2b04fafbc8a0d801843" exitCode=0 Feb 19 00:19:20 crc kubenswrapper[4825]: I0219 00:19:20.334023 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"701b0694-d308-4569-823a-5848cfa5fea4","Type":"ContainerDied","Data":"e58d1e6c116c859f42073734ae6fec0d9d51866c387aa2b04fafbc8a0d801843"} Feb 19 00:19:22 crc kubenswrapper[4825]: I0219 00:19:22.629221 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-b7hpc"] Feb 19 00:19:22 crc kubenswrapper[4825]: I0219 00:19:22.631077 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-b7hpc" Feb 19 00:19:22 crc kubenswrapper[4825]: I0219 00:19:22.633424 4825 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-mjr6l" Feb 19 00:19:22 crc kubenswrapper[4825]: I0219 00:19:22.643265 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-b7hpc"] Feb 19 00:19:22 crc kubenswrapper[4825]: I0219 00:19:22.700427 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bk49\" (UniqueName: \"kubernetes.io/projected/16f3958c-d237-4741-97d0-332277c0e53a-kube-api-access-9bk49\") pod \"cert-manager-545d4d4674-b7hpc\" (UID: \"16f3958c-d237-4741-97d0-332277c0e53a\") " pod="cert-manager/cert-manager-545d4d4674-b7hpc" Feb 19 00:19:22 crc kubenswrapper[4825]: I0219 00:19:22.700489 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16f3958c-d237-4741-97d0-332277c0e53a-bound-sa-token\") pod \"cert-manager-545d4d4674-b7hpc\" (UID: \"16f3958c-d237-4741-97d0-332277c0e53a\") " pod="cert-manager/cert-manager-545d4d4674-b7hpc" Feb 19 00:19:22 crc kubenswrapper[4825]: I0219 00:19:22.801279 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16f3958c-d237-4741-97d0-332277c0e53a-bound-sa-token\") pod \"cert-manager-545d4d4674-b7hpc\" (UID: \"16f3958c-d237-4741-97d0-332277c0e53a\") " pod="cert-manager/cert-manager-545d4d4674-b7hpc" Feb 19 00:19:22 crc kubenswrapper[4825]: I0219 00:19:22.801408 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bk49\" (UniqueName: \"kubernetes.io/projected/16f3958c-d237-4741-97d0-332277c0e53a-kube-api-access-9bk49\") pod \"cert-manager-545d4d4674-b7hpc\" (UID: 
\"16f3958c-d237-4741-97d0-332277c0e53a\") " pod="cert-manager/cert-manager-545d4d4674-b7hpc" Feb 19 00:19:22 crc kubenswrapper[4825]: I0219 00:19:22.824177 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16f3958c-d237-4741-97d0-332277c0e53a-bound-sa-token\") pod \"cert-manager-545d4d4674-b7hpc\" (UID: \"16f3958c-d237-4741-97d0-332277c0e53a\") " pod="cert-manager/cert-manager-545d4d4674-b7hpc" Feb 19 00:19:22 crc kubenswrapper[4825]: I0219 00:19:22.824541 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bk49\" (UniqueName: \"kubernetes.io/projected/16f3958c-d237-4741-97d0-332277c0e53a-kube-api-access-9bk49\") pod \"cert-manager-545d4d4674-b7hpc\" (UID: \"16f3958c-d237-4741-97d0-332277c0e53a\") " pod="cert-manager/cert-manager-545d4d4674-b7hpc" Feb 19 00:19:22 crc kubenswrapper[4825]: I0219 00:19:22.951286 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-b7hpc" Feb 19 00:19:23 crc kubenswrapper[4825]: I0219 00:19:23.409583 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-b7hpc"] Feb 19 00:19:23 crc kubenswrapper[4825]: W0219 00:19:23.416645 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16f3958c_d237_4741_97d0_332277c0e53a.slice/crio-b981f90babbbf251bf276c93f4a3a39e17b87c6684c6fa612b183232c0ae727a WatchSource:0}: Error finding container b981f90babbbf251bf276c93f4a3a39e17b87c6684c6fa612b183232c0ae727a: Status 404 returned error can't find the container with id b981f90babbbf251bf276c93f4a3a39e17b87c6684c6fa612b183232c0ae727a Feb 19 00:19:24 crc kubenswrapper[4825]: I0219 00:19:24.360685 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-b7hpc" 
event={"ID":"16f3958c-d237-4741-97d0-332277c0e53a","Type":"ContainerStarted","Data":"b981f90babbbf251bf276c93f4a3a39e17b87c6684c6fa612b183232c0ae727a"} Feb 19 00:19:26 crc kubenswrapper[4825]: I0219 00:19:26.381026 4825 generic.go:334] "Generic (PLEG): container finished" podID="701b0694-d308-4569-823a-5848cfa5fea4" containerID="ce80144ed45a1a2af2814ec19ff709354bcb0e491fb8945db26fa5e698bfc8fd" exitCode=0 Feb 19 00:19:26 crc kubenswrapper[4825]: I0219 00:19:26.381099 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"701b0694-d308-4569-823a-5848cfa5fea4","Type":"ContainerDied","Data":"ce80144ed45a1a2af2814ec19ff709354bcb0e491fb8945db26fa5e698bfc8fd"} Feb 19 00:19:28 crc kubenswrapper[4825]: I0219 00:19:28.395912 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"701b0694-d308-4569-823a-5848cfa5fea4","Type":"ContainerStarted","Data":"b1216a487bc705072b43db19b33533048ca2be73bc5250250b7d2a5475098ab1"} Feb 19 00:19:28 crc kubenswrapper[4825]: I0219 00:19:28.397750 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:19:28 crc kubenswrapper[4825]: I0219 00:19:28.400976 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" event={"ID":"d67e5607-ad75-4cca-9aa6-db360e6334b1","Type":"ContainerStarted","Data":"44883690a3730e3689901f5f8f76cfb9d4ba8e3ac3643d3dda4d64dc8242d97a"} Feb 19 00:19:28 crc kubenswrapper[4825]: I0219 00:19:28.401433 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" Feb 19 00:19:28 crc kubenswrapper[4825]: I0219 00:19:28.405614 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-b7hpc" 
event={"ID":"16f3958c-d237-4741-97d0-332277c0e53a","Type":"ContainerStarted","Data":"47b316ef0df59531fa835573f63b88eadbc145fd6fb47cb30a94b3d92e270d0f"} Feb 19 00:19:28 crc kubenswrapper[4825]: I0219 00:19:28.406921 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-x2dr2" event={"ID":"c6835fb4-5c79-47fa-8a6f-a827d96e1363","Type":"ContainerStarted","Data":"f7a747b13162a0c00ca9242db0a77df1c3d5857f018fbc0a8fbe7737d5e13f37"} Feb 19 00:19:28 crc kubenswrapper[4825]: I0219 00:19:28.458497 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" podStartSLOduration=4.571344646 podStartE2EDuration="14.458476136s" podCreationTimestamp="2026-02-19 00:19:14 +0000 UTC" firstStartedPulling="2026-02-19 00:19:18.202890412 +0000 UTC m=+703.893856459" lastFinishedPulling="2026-02-19 00:19:28.090021902 +0000 UTC m=+713.780987949" observedRunningTime="2026-02-19 00:19:28.45710579 +0000 UTC m=+714.148071927" watchObservedRunningTime="2026-02-19 00:19:28.458476136 +0000 UTC m=+714.149442183" Feb 19 00:19:28 crc kubenswrapper[4825]: I0219 00:19:28.458789 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=11.67860589 podStartE2EDuration="29.458784444s" podCreationTimestamp="2026-02-19 00:18:59 +0000 UTC" firstStartedPulling="2026-02-19 00:19:00.327341615 +0000 UTC m=+686.018307682" lastFinishedPulling="2026-02-19 00:19:18.107520189 +0000 UTC m=+703.798486236" observedRunningTime="2026-02-19 00:19:28.439689696 +0000 UTC m=+714.130655753" watchObservedRunningTime="2026-02-19 00:19:28.458784444 +0000 UTC m=+714.149750491" Feb 19 00:19:28 crc kubenswrapper[4825]: I0219 00:19:28.516547 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-b7hpc" podStartSLOduration=1.845850661 podStartE2EDuration="6.516521734s" 
podCreationTimestamp="2026-02-19 00:19:22 +0000 UTC" firstStartedPulling="2026-02-19 00:19:23.419884703 +0000 UTC m=+709.110850750" lastFinishedPulling="2026-02-19 00:19:28.090555786 +0000 UTC m=+713.781521823" observedRunningTime="2026-02-19 00:19:28.515492167 +0000 UTC m=+714.206458214" watchObservedRunningTime="2026-02-19 00:19:28.516521734 +0000 UTC m=+714.207487781" Feb 19 00:19:28 crc kubenswrapper[4825]: I0219 00:19:28.518596 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-x2dr2" podStartSLOduration=2.318777063 podStartE2EDuration="12.518582559s" podCreationTimestamp="2026-02-19 00:19:16 +0000 UTC" firstStartedPulling="2026-02-19 00:19:17.885854499 +0000 UTC m=+703.576820546" lastFinishedPulling="2026-02-19 00:19:28.085659995 +0000 UTC m=+713.776626042" observedRunningTime="2026-02-19 00:19:28.481796548 +0000 UTC m=+714.172762605" watchObservedRunningTime="2026-02-19 00:19:28.518582559 +0000 UTC m=+714.209548606" Feb 19 00:19:34 crc kubenswrapper[4825]: I0219 00:19:34.511082 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-dnfkd" Feb 19 00:19:40 crc kubenswrapper[4825]: I0219 00:19:40.110269 4825 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="701b0694-d308-4569-823a-5848cfa5fea4" containerName="elasticsearch" probeResult="failure" output=< Feb 19 00:19:40 crc kubenswrapper[4825]: {"timestamp": "2026-02-19T00:19:40+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 19 00:19:40 crc kubenswrapper[4825]: > Feb 19 00:19:45 crc kubenswrapper[4825]: I0219 00:19:45.718077 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.323700 4825 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.325933 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.330808 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.330912 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.331349 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.331435 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.331663 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-dldcm" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.343674 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.397825 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc 
kubenswrapper[4825]: I0219 00:19:48.398252 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.398367 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.398477 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.398645 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.398763 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.398884 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.399065 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl7xp\" (UniqueName: \"kubernetes.io/projected/b92fbe88-557b-4a12-9796-09cd67dca3dd-kube-api-access-jl7xp\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.399193 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-builder-dockercfg-dldcm-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.399414 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b92fbe88-557b-4a12-9796-09cd67dca3dd-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.399521 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-builder-dockercfg-dldcm-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.399625 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.399666 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b92fbe88-557b-4a12-9796-09cd67dca3dd-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.500955 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.501011 4825 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.501032 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.501071 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.501092 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.501114 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.501140 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl7xp\" (UniqueName: \"kubernetes.io/projected/b92fbe88-557b-4a12-9796-09cd67dca3dd-kube-api-access-jl7xp\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.501169 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-builder-dockercfg-dldcm-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.501198 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b92fbe88-557b-4a12-9796-09cd67dca3dd-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.501215 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-builder-dockercfg-dldcm-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.501251 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/b92fbe88-557b-4a12-9796-09cd67dca3dd-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.501266 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.501288 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.502106 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b92fbe88-557b-4a12-9796-09cd67dca3dd-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.502744 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b92fbe88-557b-4a12-9796-09cd67dca3dd-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " 
pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.502861 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.503060 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.503403 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.504699 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.504700 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.504749 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.504789 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.508265 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-builder-dockercfg-dldcm-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.508289 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-builder-dockercfg-dldcm-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 
00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.521509 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.537891 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl7xp\" (UniqueName: \"kubernetes.io/projected/b92fbe88-557b-4a12-9796-09cd67dca3dd-kube-api-access-jl7xp\") pod \"service-telemetry-framework-index-1-build\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:48 crc kubenswrapper[4825]: I0219 00:19:48.655690 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:49 crc kubenswrapper[4825]: I0219 00:19:49.126338 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 19 00:19:49 crc kubenswrapper[4825]: I0219 00:19:49.541155 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"b92fbe88-557b-4a12-9796-09cd67dca3dd","Type":"ContainerStarted","Data":"ada36d47eb83bccb1ee38ac89737a8438662cb2ff805464e07f2ddf2080078a1"} Feb 19 00:19:54 crc kubenswrapper[4825]: I0219 00:19:54.588821 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"b92fbe88-557b-4a12-9796-09cd67dca3dd","Type":"ContainerStarted","Data":"2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e"} Feb 19 00:19:54 crc kubenswrapper[4825]: E0219 00:19:54.655955 4825 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5152781452663951669, SKID=, AKID=B4:A3:38:92:66:66:E9:EC:9B:7F:83:C4:9F:9E:C5:0A:1E:3D:ED:C4 failed: x509: certificate signed by unknown authority" Feb 19 00:19:55 crc kubenswrapper[4825]: I0219 00:19:55.685862 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 19 00:19:56 crc kubenswrapper[4825]: I0219 00:19:56.602100 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-1-build" podUID="b92fbe88-557b-4a12-9796-09cd67dca3dd" containerName="git-clone" containerID="cri-o://2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e" gracePeriod=30 Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.029338 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_b92fbe88-557b-4a12-9796-09cd67dca3dd/git-clone/0.log" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.029840 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.162823 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-system-configs\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.162882 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-buildworkdir\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.162937 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-container-storage-root\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163001 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163042 4825 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b92fbe88-557b-4a12-9796-09cd67dca3dd-node-pullsecrets\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163095 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b92fbe88-557b-4a12-9796-09cd67dca3dd-buildcachedir\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163131 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-proxy-ca-bundles\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163162 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-container-storage-run\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163189 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-builder-dockercfg-dldcm-push\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163247 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl7xp\" (UniqueName: 
\"kubernetes.io/projected/b92fbe88-557b-4a12-9796-09cd67dca3dd-kube-api-access-jl7xp\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163306 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-blob-cache\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163369 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-ca-bundles\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163422 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-builder-dockercfg-dldcm-pull\") pod \"b92fbe88-557b-4a12-9796-09cd67dca3dd\" (UID: \"b92fbe88-557b-4a12-9796-09cd67dca3dd\") " Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163562 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163862 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.163589 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.164060 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.164133 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b92fbe88-557b-4a12-9796-09cd67dca3dd-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.164163 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b92fbe88-557b-4a12-9796-09cd67dca3dd-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.164302 4825 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.164334 4825 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.164765 4825 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.164793 4825 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b92fbe88-557b-4a12-9796-09cd67dca3dd-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.164809 4825 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b92fbe88-557b-4a12-9796-09cd67dca3dd-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.164830 4825 reconciler_common.go:293] "Volume detached for volume 
\"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.164930 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.165096 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.166117 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.171224 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). 
InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.171552 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92fbe88-557b-4a12-9796-09cd67dca3dd-kube-api-access-jl7xp" (OuterVolumeSpecName: "kube-api-access-jl7xp") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). InnerVolumeSpecName "kube-api-access-jl7xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.172930 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-builder-dockercfg-dldcm-push" (OuterVolumeSpecName: "builder-dockercfg-dldcm-push") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). InnerVolumeSpecName "builder-dockercfg-dldcm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.171989 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-builder-dockercfg-dldcm-pull" (OuterVolumeSpecName: "builder-dockercfg-dldcm-pull") pod "b92fbe88-557b-4a12-9796-09cd67dca3dd" (UID: "b92fbe88-557b-4a12-9796-09cd67dca3dd"). InnerVolumeSpecName "builder-dockercfg-dldcm-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.266721 4825 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.266767 4825 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.266780 4825 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-builder-dockercfg-dldcm-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.266789 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl7xp\" (UniqueName: \"kubernetes.io/projected/b92fbe88-557b-4a12-9796-09cd67dca3dd-kube-api-access-jl7xp\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.266801 4825 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.266809 4825 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b92fbe88-557b-4a12-9796-09cd67dca3dd-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.266820 4825 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: 
\"kubernetes.io/secret/b92fbe88-557b-4a12-9796-09cd67dca3dd-builder-dockercfg-dldcm-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.615046 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_b92fbe88-557b-4a12-9796-09cd67dca3dd/git-clone/0.log" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.615141 4825 generic.go:334] "Generic (PLEG): container finished" podID="b92fbe88-557b-4a12-9796-09cd67dca3dd" containerID="2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e" exitCode=1 Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.615189 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"b92fbe88-557b-4a12-9796-09cd67dca3dd","Type":"ContainerDied","Data":"2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e"} Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.615230 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"b92fbe88-557b-4a12-9796-09cd67dca3dd","Type":"ContainerDied","Data":"ada36d47eb83bccb1ee38ac89737a8438662cb2ff805464e07f2ddf2080078a1"} Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.615278 4825 scope.go:117] "RemoveContainer" containerID="2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.615278 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.653537 4825 scope.go:117] "RemoveContainer" containerID="2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e" Feb 19 00:19:57 crc kubenswrapper[4825]: E0219 00:19:57.654339 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e\": container with ID starting with 2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e not found: ID does not exist" containerID="2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.654381 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e"} err="failed to get container status \"2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e\": rpc error: code = NotFound desc = could not find container \"2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e\": container with ID starting with 2c0b9f72fdae9f5c2a697fb9177749b43ccabdfa6b245cf7fc96b681987c198e not found: ID does not exist" Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.670093 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 19 00:19:57 crc kubenswrapper[4825]: I0219 00:19:57.674750 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Feb 19 00:19:59 crc kubenswrapper[4825]: I0219 00:19:59.078208 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92fbe88-557b-4a12-9796-09cd67dca3dd" path="/var/lib/kubelet/pods/b92fbe88-557b-4a12-9796-09cd67dca3dd/volumes" Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 
00:20:07.142931 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 19 00:20:07 crc kubenswrapper[4825]: E0219 00:20:07.144121 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92fbe88-557b-4a12-9796-09cd67dca3dd" containerName="git-clone"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.144138 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92fbe88-557b-4a12-9796-09cd67dca3dd" containerName="git-clone"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.144293 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92fbe88-557b-4a12-9796-09cd67dca3dd" containerName="git-clone"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.145432 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.153081 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-2-global-ca"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.153429 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-2-ca"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.153608 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-2-sys-config"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.153814 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-dldcm"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.153831 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.171379 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.223332 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-builder-dockercfg-dldcm-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.223394 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/26ca51d5-7d94-4211-946b-75886190fd58-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.223434 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.223598 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-builder-dockercfg-dldcm-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.223699 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.223751 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.223978 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.224164 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.224242 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/26ca51d5-7d94-4211-946b-75886190fd58-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.224315 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.224361 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.224450 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.224672 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxnxl\" (UniqueName: \"kubernetes.io/projected/26ca51d5-7d94-4211-946b-75886190fd58-kube-api-access-bxnxl\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.325993 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.326385 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/26ca51d5-7d94-4211-946b-75886190fd58-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.326481 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/26ca51d5-7d94-4211-946b-75886190fd58-buildcachedir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.326494 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.326624 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.326674 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.326768 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxnxl\" (UniqueName: \"kubernetes.io/projected/26ca51d5-7d94-4211-946b-75886190fd58-kube-api-access-bxnxl\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.326812 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-builder-dockercfg-dldcm-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.326842 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/26ca51d5-7d94-4211-946b-75886190fd58-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.326903 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.326939 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-builder-dockercfg-dldcm-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.326974 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.327000 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.327014 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.327082 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.327182 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-build-blob-cache\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.327318 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/26ca51d5-7d94-4211-946b-75886190fd58-node-pullsecrets\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.327652 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-container-storage-root\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.327758 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-container-storage-run\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.327840 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-buildworkdir\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.328047 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-ca-bundles\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.328985 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-system-configs\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.336134 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.336241 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-builder-dockercfg-dldcm-pull\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.337961 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-builder-dockercfg-dldcm-push\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.351684 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxnxl\" (UniqueName: \"kubernetes.io/projected/26ca51d5-7d94-4211-946b-75886190fd58-kube-api-access-bxnxl\") pod \"service-telemetry-framework-index-2-build\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") " pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.474877 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:07 crc kubenswrapper[4825]: I0219 00:20:07.722445 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 19 00:20:08 crc kubenswrapper[4825]: I0219 00:20:08.700008 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"26ca51d5-7d94-4211-946b-75886190fd58","Type":"ContainerStarted","Data":"5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c"}
Feb 19 00:20:08 crc kubenswrapper[4825]: I0219 00:20:08.700563 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"26ca51d5-7d94-4211-946b-75886190fd58","Type":"ContainerStarted","Data":"a410a585ebc568529bc07e5c986032f22f811fb7efc72967c1745e467d832fa8"}
Feb 19 00:20:08 crc kubenswrapper[4825]: E0219 00:20:08.772798 4825 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=5152781452663951669, SKID=, AKID=B4:A3:38:92:66:66:E9:EC:9B:7F:83:C4:9F:9E:C5:0A:1E:3D:ED:C4 failed: x509: certificate signed by unknown authority"
Feb 19 00:20:09 crc kubenswrapper[4825]: I0219 00:20:09.811372 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 19 00:20:10 crc kubenswrapper[4825]: I0219 00:20:10.718269 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-framework-index-2-build" podUID="26ca51d5-7d94-4211-946b-75886190fd58" containerName="git-clone" containerID="cri-o://5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c" gracePeriod=30
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.663139 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_26ca51d5-7d94-4211-946b-75886190fd58/git-clone/0.log"
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.663646 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.742047 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-2-build_26ca51d5-7d94-4211-946b-75886190fd58/git-clone/0.log"
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.742092 4825 generic.go:334] "Generic (PLEG): container finished" podID="26ca51d5-7d94-4211-946b-75886190fd58" containerID="5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c" exitCode=1
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.742149 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"26ca51d5-7d94-4211-946b-75886190fd58","Type":"ContainerDied","Data":"5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c"}
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.742183 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-2-build" event={"ID":"26ca51d5-7d94-4211-946b-75886190fd58","Type":"ContainerDied","Data":"a410a585ebc568529bc07e5c986032f22f811fb7efc72967c1745e467d832fa8"}
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.742201 4825 scope.go:117] "RemoveContainer" containerID="5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c"
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.742332 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-2-build"
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.762950 4825 scope.go:117] "RemoveContainer" containerID="5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c"
Feb 19 00:20:11 crc kubenswrapper[4825]: E0219 00:20:11.763460 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c\": container with ID starting with 5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c not found: ID does not exist" containerID="5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c"
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.763498 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c"} err="failed to get container status \"5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c\": rpc error: code = NotFound desc = could not find container \"5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c\": container with ID starting with 5798036970e89a43b32eb228c9e6bc296b9cd701758096c77ca1fcbb7da6927c not found: ID does not exist"
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801361 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-builder-dockercfg-dldcm-push\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801519 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-container-storage-root\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801541 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/26ca51d5-7d94-4211-946b-75886190fd58-node-pullsecrets\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801565 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-container-storage-run\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801597 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-builder-dockercfg-dldcm-pull\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801622 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-build-blob-cache\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801644 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-ca-bundles\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801679 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801700 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxnxl\" (UniqueName: \"kubernetes.io/projected/26ca51d5-7d94-4211-946b-75886190fd58-kube-api-access-bxnxl\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801723 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-system-configs\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801761 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-proxy-ca-bundles\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801742 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26ca51d5-7d94-4211-946b-75886190fd58-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801827 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-buildworkdir\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801847 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/26ca51d5-7d94-4211-946b-75886190fd58-buildcachedir\") pod \"26ca51d5-7d94-4211-946b-75886190fd58\" (UID: \"26ca51d5-7d94-4211-946b-75886190fd58\") "
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801947 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.801989 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26ca51d5-7d94-4211-946b-75886190fd58-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.802181 4825 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/26ca51d5-7d94-4211-946b-75886190fd58-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.802196 4825 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.802209 4825 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/26ca51d5-7d94-4211-946b-75886190fd58-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.802209 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.802353 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.802534 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.802785 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.802852 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.803130 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.808080 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.808363 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-builder-dockercfg-dldcm-pull" (OuterVolumeSpecName: "builder-dockercfg-dldcm-pull") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "builder-dockercfg-dldcm-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.808630 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-builder-dockercfg-dldcm-push" (OuterVolumeSpecName: "builder-dockercfg-dldcm-push") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "builder-dockercfg-dldcm-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.808659 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ca51d5-7d94-4211-946b-75886190fd58-kube-api-access-bxnxl" (OuterVolumeSpecName: "kube-api-access-bxnxl") pod "26ca51d5-7d94-4211-946b-75886190fd58" (UID: "26ca51d5-7d94-4211-946b-75886190fd58"). InnerVolumeSpecName "kube-api-access-bxnxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.903682 4825 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.903729 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxnxl\" (UniqueName: \"kubernetes.io/projected/26ca51d5-7d94-4211-946b-75886190fd58-kube-api-access-bxnxl\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.903741 4825 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.903750 4825 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.903760 4825 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.903770 4825 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-builder-dockercfg-dldcm-push\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.903779 4825 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.903789 4825 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/26ca51d5-7d94-4211-946b-75886190fd58-builder-dockercfg-dldcm-pull\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.903799 4825 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/26ca51d5-7d94-4211-946b-75886190fd58-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:11 crc kubenswrapper[4825]: I0219 00:20:11.903807 4825 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26ca51d5-7d94-4211-946b-75886190fd58-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:12 crc kubenswrapper[4825]: I0219 00:20:12.053947 4825 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 00:20:12 crc kubenswrapper[4825]: I0219 00:20:12.097608 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 19 00:20:12 crc kubenswrapper[4825]: I0219 00:20:12.107568 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-framework-index-2-build"]
Feb 19 00:20:13 crc kubenswrapper[4825]: I0219 00:20:13.081223 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ca51d5-7d94-4211-946b-75886190fd58" path="/var/lib/kubelet/pods/26ca51d5-7d94-4211-946b-75886190fd58/volumes"
Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.237703 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"]
Feb 19 00:20:21 crc kubenswrapper[4825]: 
E0219 00:20:21.238780 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ca51d5-7d94-4211-946b-75886190fd58" containerName="git-clone" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.238795 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ca51d5-7d94-4211-946b-75886190fd58" containerName="git-clone" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.238908 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ca51d5-7d94-4211-946b-75886190fd58" containerName="git-clone" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.239881 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.242475 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-3-sys-config" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.242996 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-dldcm" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.243436 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.244150 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-3-ca" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.244166 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-3-global-ca" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.256220 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-build-blob-cache\") pod 
\"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.256289 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6223fc10-fc8e-46eb-b120-b50564135d8a-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.256333 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6223fc10-fc8e-46eb-b120-b50564135d8a-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.256363 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz97g\" (UniqueName: \"kubernetes.io/projected/6223fc10-fc8e-46eb-b120-b50564135d8a-kube-api-access-qz97g\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.256385 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc 
kubenswrapper[4825]: I0219 00:20:21.256416 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.256448 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.256477 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-builder-dockercfg-dldcm-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.256529 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-builder-dockercfg-dldcm-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.256567 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.256596 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.256625 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.256685 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.263327 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.358678 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.358854 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.358906 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.358956 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6223fc10-fc8e-46eb-b120-b50564135d8a-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359038 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6223fc10-fc8e-46eb-b120-b50564135d8a-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: 
\"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359091 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz97g\" (UniqueName: \"kubernetes.io/projected/6223fc10-fc8e-46eb-b120-b50564135d8a-kube-api-access-qz97g\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359134 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359176 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359191 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6223fc10-fc8e-46eb-b120-b50564135d8a-buildcachedir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359208 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359297 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-builder-dockercfg-dldcm-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359348 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-builder-dockercfg-dldcm-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359418 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359454 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-system-configs\") pod 
\"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359453 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-build-blob-cache\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359620 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-container-storage-run\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359338 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-container-storage-root\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.359966 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-buildworkdir\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.360138 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.360226 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-system-configs\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.360420 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-ca-bundles\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.360815 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6223fc10-fc8e-46eb-b120-b50564135d8a-node-pullsecrets\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.367496 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-builder-dockercfg-dldcm-pull\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" 
Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.367530 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.368186 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-builder-dockercfg-dldcm-push\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.406316 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz97g\" (UniqueName: \"kubernetes.io/projected/6223fc10-fc8e-46eb-b120-b50564135d8a-kube-api-access-qz97g\") pod \"service-telemetry-framework-index-3-build\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") " pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:21 crc kubenswrapper[4825]: I0219 00:20:21.558987 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build" Feb 19 00:20:22 crc kubenswrapper[4825]: I0219 00:20:22.069937 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-3-build"] Feb 19 00:20:22 crc kubenswrapper[4825]: I0219 00:20:22.836838 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"6223fc10-fc8e-46eb-b120-b50564135d8a","Type":"ContainerStarted","Data":"8ce0fa00ce761a4051da44613e0ede687c6a8dd612324bb7f1d7680de6ef954c"} Feb 19 00:20:22 crc kubenswrapper[4825]: I0219 00:20:22.837289 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"6223fc10-fc8e-46eb-b120-b50564135d8a","Type":"ContainerStarted","Data":"e856969fcaeb8fb866b3843662fa52481b1ba68606af82fdeed4c5ba5df4c9ea"} Feb 19 00:20:23 crc kubenswrapper[4825]: I0219 00:20:23.845420 4825 generic.go:334] "Generic (PLEG): container finished" podID="6223fc10-fc8e-46eb-b120-b50564135d8a" containerID="8ce0fa00ce761a4051da44613e0ede687c6a8dd612324bb7f1d7680de6ef954c" exitCode=0 Feb 19 00:20:23 crc kubenswrapper[4825]: I0219 00:20:23.845522 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"6223fc10-fc8e-46eb-b120-b50564135d8a","Type":"ContainerDied","Data":"8ce0fa00ce761a4051da44613e0ede687c6a8dd612324bb7f1d7680de6ef954c"} Feb 19 00:20:24 crc kubenswrapper[4825]: I0219 00:20:24.860972 4825 generic.go:334] "Generic (PLEG): container finished" podID="6223fc10-fc8e-46eb-b120-b50564135d8a" containerID="33bed331a9346a3c6cc3d06c472f4a9f7822e45438e59f631a7d10a90919cf99" exitCode=0 Feb 19 00:20:24 crc kubenswrapper[4825]: I0219 00:20:24.861063 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" 
event={"ID":"6223fc10-fc8e-46eb-b120-b50564135d8a","Type":"ContainerDied","Data":"33bed331a9346a3c6cc3d06c472f4a9f7822e45438e59f631a7d10a90919cf99"} Feb 19 00:20:24 crc kubenswrapper[4825]: I0219 00:20:24.922925 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-3-build_6223fc10-fc8e-46eb-b120-b50564135d8a/manage-dockerfile/0.log" Feb 19 00:20:25 crc kubenswrapper[4825]: I0219 00:20:25.871912 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"6223fc10-fc8e-46eb-b120-b50564135d8a","Type":"ContainerStarted","Data":"59e317fcecf4fc9785685390019085cf892e583da201b70abbcbaad222decbab"} Feb 19 00:20:25 crc kubenswrapper[4825]: I0219 00:20:25.929421 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-3-build" podStartSLOduration=4.92938455 podStartE2EDuration="4.92938455s" podCreationTimestamp="2026-02-19 00:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:20:25.926668157 +0000 UTC m=+771.617634214" watchObservedRunningTime="2026-02-19 00:20:25.92938455 +0000 UTC m=+771.620350647" Feb 19 00:20:28 crc kubenswrapper[4825]: I0219 00:20:28.823472 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:20:28 crc kubenswrapper[4825]: I0219 00:20:28.824003 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:20:58 crc kubenswrapper[4825]: I0219 00:20:58.823888 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:20:58 crc kubenswrapper[4825]: I0219 00:20:58.824706 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:21:07 crc kubenswrapper[4825]: I0219 00:21:07.180855 4825 generic.go:334] "Generic (PLEG): container finished" podID="6223fc10-fc8e-46eb-b120-b50564135d8a" containerID="59e317fcecf4fc9785685390019085cf892e583da201b70abbcbaad222decbab" exitCode=0 Feb 19 00:21:07 crc kubenswrapper[4825]: I0219 00:21:07.180914 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"6223fc10-fc8e-46eb-b120-b50564135d8a","Type":"ContainerDied","Data":"59e317fcecf4fc9785685390019085cf892e583da201b70abbcbaad222decbab"} Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.458528 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build"
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.600666 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-builder-dockercfg-dldcm-push\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.600761 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-ca-bundles\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.600862 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6223fc10-fc8e-46eb-b120-b50564135d8a-node-pullsecrets\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.600900 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6223fc10-fc8e-46eb-b120-b50564135d8a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.600906 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.600966 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-proxy-ca-bundles\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.601028 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6223fc10-fc8e-46eb-b120-b50564135d8a-buildcachedir\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.601047 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-system-configs\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.601067 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-container-storage-run\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.601101 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-builder-dockercfg-dldcm-pull\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.601125 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz97g\" (UniqueName: \"kubernetes.io/projected/6223fc10-fc8e-46eb-b120-b50564135d8a-kube-api-access-qz97g\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.601146 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-container-storage-root\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.601199 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-build-blob-cache\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.601189 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6223fc10-fc8e-46eb-b120-b50564135d8a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.601230 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-buildworkdir\") pod \"6223fc10-fc8e-46eb-b120-b50564135d8a\" (UID: \"6223fc10-fc8e-46eb-b120-b50564135d8a\") "
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.601671 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.602021 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.602005 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.602530 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.603082 4825 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.603114 4825 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.603131 4825 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6223fc10-fc8e-46eb-b120-b50564135d8a-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.603116 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.603146 4825 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.603249 4825 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6223fc10-fc8e-46eb-b120-b50564135d8a-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.603281 4825 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6223fc10-fc8e-46eb-b120-b50564135d8a-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.608612 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.609276 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6223fc10-fc8e-46eb-b120-b50564135d8a-kube-api-access-qz97g" (OuterVolumeSpecName: "kube-api-access-qz97g") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "kube-api-access-qz97g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.609299 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-builder-dockercfg-dldcm-pull" (OuterVolumeSpecName: "builder-dockercfg-dldcm-pull") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "builder-dockercfg-dldcm-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.609353 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-builder-dockercfg-dldcm-push" (OuterVolumeSpecName: "builder-dockercfg-dldcm-push") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "builder-dockercfg-dldcm-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.704946 4825 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.705011 4825 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.705024 4825 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-dldcm-pull\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-builder-dockercfg-dldcm-pull\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.705036 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz97g\" (UniqueName: \"kubernetes.io/projected/6223fc10-fc8e-46eb-b120-b50564135d8a-kube-api-access-qz97g\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:08 crc kubenswrapper[4825]: I0219 00:21:08.705047 4825 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-dldcm-push\" (UniqueName: \"kubernetes.io/secret/6223fc10-fc8e-46eb-b120-b50564135d8a-builder-dockercfg-dldcm-push\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.114663 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.207759 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-3-build" event={"ID":"6223fc10-fc8e-46eb-b120-b50564135d8a","Type":"ContainerDied","Data":"e856969fcaeb8fb866b3843662fa52481b1ba68606af82fdeed4c5ba5df4c9ea"}
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.207816 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e856969fcaeb8fb866b3843662fa52481b1ba68606af82fdeed4c5ba5df4c9ea"
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.207840 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-3-build"
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.211894 4825 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.444727 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-2rhwb"]
Feb 19 00:21:09 crc kubenswrapper[4825]: E0219 00:21:09.445111 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6223fc10-fc8e-46eb-b120-b50564135d8a" containerName="git-clone"
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.445131 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6223fc10-fc8e-46eb-b120-b50564135d8a" containerName="git-clone"
Feb 19 00:21:09 crc kubenswrapper[4825]: E0219 00:21:09.445231 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6223fc10-fc8e-46eb-b120-b50564135d8a" containerName="docker-build"
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.445242 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6223fc10-fc8e-46eb-b120-b50564135d8a" containerName="docker-build"
Feb 19 00:21:09 crc kubenswrapper[4825]: E0219 00:21:09.445265 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6223fc10-fc8e-46eb-b120-b50564135d8a" containerName="manage-dockerfile"
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.445399 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="6223fc10-fc8e-46eb-b120-b50564135d8a" containerName="manage-dockerfile"
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.445565 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="6223fc10-fc8e-46eb-b120-b50564135d8a" containerName="docker-build"
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.446116 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2rhwb"
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.449906 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"infrawatch-operators-dockercfg-vl6hf"
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.452098 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-2rhwb"]
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.618108 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxdbq\" (UniqueName: \"kubernetes.io/projected/dfa94491-cd26-4c71-9c0e-b6ec0d172802-kube-api-access-hxdbq\") pod \"infrawatch-operators-2rhwb\" (UID: \"dfa94491-cd26-4c71-9c0e-b6ec0d172802\") " pod="service-telemetry/infrawatch-operators-2rhwb"
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.721225 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxdbq\" (UniqueName: \"kubernetes.io/projected/dfa94491-cd26-4c71-9c0e-b6ec0d172802-kube-api-access-hxdbq\") pod \"infrawatch-operators-2rhwb\" (UID: \"dfa94491-cd26-4c71-9c0e-b6ec0d172802\") " pod="service-telemetry/infrawatch-operators-2rhwb"
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.745761 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxdbq\" (UniqueName: \"kubernetes.io/projected/dfa94491-cd26-4c71-9c0e-b6ec0d172802-kube-api-access-hxdbq\") pod \"infrawatch-operators-2rhwb\" (UID: \"dfa94491-cd26-4c71-9c0e-b6ec0d172802\") " pod="service-telemetry/infrawatch-operators-2rhwb"
Feb 19 00:21:09 crc kubenswrapper[4825]: I0219 00:21:09.764759 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2rhwb"
Feb 19 00:21:10 crc kubenswrapper[4825]: I0219 00:21:10.047045 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-2rhwb"]
Feb 19 00:21:10 crc kubenswrapper[4825]: I0219 00:21:10.215297 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2rhwb" event={"ID":"dfa94491-cd26-4c71-9c0e-b6ec0d172802","Type":"ContainerStarted","Data":"518aa4ed4448657b44cbed357453e180d28336989e1c0f9d6d2d177d89893306"}
Feb 19 00:21:10 crc kubenswrapper[4825]: I0219 00:21:10.705000 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6223fc10-fc8e-46eb-b120-b50564135d8a" (UID: "6223fc10-fc8e-46eb-b120-b50564135d8a"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:21:10 crc kubenswrapper[4825]: I0219 00:21:10.739786 4825 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6223fc10-fc8e-46eb-b120-b50564135d8a-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:13 crc kubenswrapper[4825]: I0219 00:21:13.821279 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-2rhwb"]
Feb 19 00:21:14 crc kubenswrapper[4825]: I0219 00:21:14.632453 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-6jzhg"]
Feb 19 00:21:14 crc kubenswrapper[4825]: I0219 00:21:14.633379 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-6jzhg"
Feb 19 00:21:14 crc kubenswrapper[4825]: I0219 00:21:14.646206 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-6jzhg"]
Feb 19 00:21:14 crc kubenswrapper[4825]: I0219 00:21:14.808183 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vg4d\" (UniqueName: \"kubernetes.io/projected/defbd6db-e287-4ee5-b651-05a45feb7ae4-kube-api-access-9vg4d\") pod \"infrawatch-operators-6jzhg\" (UID: \"defbd6db-e287-4ee5-b651-05a45feb7ae4\") " pod="service-telemetry/infrawatch-operators-6jzhg"
Feb 19 00:21:14 crc kubenswrapper[4825]: I0219 00:21:14.909982 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vg4d\" (UniqueName: \"kubernetes.io/projected/defbd6db-e287-4ee5-b651-05a45feb7ae4-kube-api-access-9vg4d\") pod \"infrawatch-operators-6jzhg\" (UID: \"defbd6db-e287-4ee5-b651-05a45feb7ae4\") " pod="service-telemetry/infrawatch-operators-6jzhg"
Feb 19 00:21:14 crc kubenswrapper[4825]: I0219 00:21:14.932195 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vg4d\" (UniqueName: \"kubernetes.io/projected/defbd6db-e287-4ee5-b651-05a45feb7ae4-kube-api-access-9vg4d\") pod \"infrawatch-operators-6jzhg\" (UID: \"defbd6db-e287-4ee5-b651-05a45feb7ae4\") " pod="service-telemetry/infrawatch-operators-6jzhg"
Feb 19 00:21:14 crc kubenswrapper[4825]: I0219 00:21:14.964584 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-6jzhg"
Feb 19 00:21:22 crc kubenswrapper[4825]: I0219 00:21:22.394441 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-6jzhg"]
Feb 19 00:21:23 crc kubenswrapper[4825]: I0219 00:21:23.301284 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2rhwb" event={"ID":"dfa94491-cd26-4c71-9c0e-b6ec0d172802","Type":"ContainerStarted","Data":"2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57"}
Feb 19 00:21:23 crc kubenswrapper[4825]: I0219 00:21:23.301440 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-2rhwb" podUID="dfa94491-cd26-4c71-9c0e-b6ec0d172802" containerName="registry-server" containerID="cri-o://2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57" gracePeriod=2
Feb 19 00:21:23 crc kubenswrapper[4825]: I0219 00:21:23.303669 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-6jzhg" event={"ID":"defbd6db-e287-4ee5-b651-05a45feb7ae4","Type":"ContainerStarted","Data":"035cc121e064ebf7b7573c483a722c75a32afbdc2b8ef07e43f612aec1cf590e"}
Feb 19 00:21:23 crc kubenswrapper[4825]: I0219 00:21:23.303741 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-6jzhg" event={"ID":"defbd6db-e287-4ee5-b651-05a45feb7ae4","Type":"ContainerStarted","Data":"f0647ec4ebf2caa416c1d008e118e863619281838dfb1492dc526593f2849226"}
Feb 19 00:21:23 crc kubenswrapper[4825]: I0219 00:21:23.339464 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-2rhwb" podStartSLOduration=2.000290604 podStartE2EDuration="14.339429645s" podCreationTimestamp="2026-02-19 00:21:09 +0000 UTC" firstStartedPulling="2026-02-19 00:21:10.061694597 +0000 UTC m=+815.752660654" lastFinishedPulling="2026-02-19 00:21:22.400833648 +0000 UTC m=+828.091799695" observedRunningTime="2026-02-19 00:21:23.327509047 +0000 UTC m=+829.018475094" watchObservedRunningTime="2026-02-19 00:21:23.339429645 +0000 UTC m=+829.030395702"
Feb 19 00:21:23 crc kubenswrapper[4825]: I0219 00:21:23.357276 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-6jzhg" podStartSLOduration=9.245831199 podStartE2EDuration="9.35724583s" podCreationTimestamp="2026-02-19 00:21:14 +0000 UTC" firstStartedPulling="2026-02-19 00:21:22.404890956 +0000 UTC m=+828.095857043" lastFinishedPulling="2026-02-19 00:21:22.516305627 +0000 UTC m=+828.207271674" observedRunningTime="2026-02-19 00:21:23.351588509 +0000 UTC m=+829.042554556" watchObservedRunningTime="2026-02-19 00:21:23.35724583 +0000 UTC m=+829.048211887"
Feb 19 00:21:23 crc kubenswrapper[4825]: I0219 00:21:23.723620 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2rhwb"
Feb 19 00:21:23 crc kubenswrapper[4825]: I0219 00:21:23.862577 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxdbq\" (UniqueName: \"kubernetes.io/projected/dfa94491-cd26-4c71-9c0e-b6ec0d172802-kube-api-access-hxdbq\") pod \"dfa94491-cd26-4c71-9c0e-b6ec0d172802\" (UID: \"dfa94491-cd26-4c71-9c0e-b6ec0d172802\") "
Feb 19 00:21:23 crc kubenswrapper[4825]: I0219 00:21:23.872930 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfa94491-cd26-4c71-9c0e-b6ec0d172802-kube-api-access-hxdbq" (OuterVolumeSpecName: "kube-api-access-hxdbq") pod "dfa94491-cd26-4c71-9c0e-b6ec0d172802" (UID: "dfa94491-cd26-4c71-9c0e-b6ec0d172802"). InnerVolumeSpecName "kube-api-access-hxdbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:21:23 crc kubenswrapper[4825]: I0219 00:21:23.965230 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxdbq\" (UniqueName: \"kubernetes.io/projected/dfa94491-cd26-4c71-9c0e-b6ec0d172802-kube-api-access-hxdbq\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:24 crc kubenswrapper[4825]: I0219 00:21:24.313509 4825 generic.go:334] "Generic (PLEG): container finished" podID="dfa94491-cd26-4c71-9c0e-b6ec0d172802" containerID="2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57" exitCode=0
Feb 19 00:21:24 crc kubenswrapper[4825]: I0219 00:21:24.313596 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2rhwb"
Feb 19 00:21:24 crc kubenswrapper[4825]: I0219 00:21:24.313653 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2rhwb" event={"ID":"dfa94491-cd26-4c71-9c0e-b6ec0d172802","Type":"ContainerDied","Data":"2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57"}
Feb 19 00:21:24 crc kubenswrapper[4825]: I0219 00:21:24.313684 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2rhwb" event={"ID":"dfa94491-cd26-4c71-9c0e-b6ec0d172802","Type":"ContainerDied","Data":"518aa4ed4448657b44cbed357453e180d28336989e1c0f9d6d2d177d89893306"}
Feb 19 00:21:24 crc kubenswrapper[4825]: I0219 00:21:24.313706 4825 scope.go:117] "RemoveContainer" containerID="2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57"
Feb 19 00:21:24 crc kubenswrapper[4825]: I0219 00:21:24.334374 4825 scope.go:117] "RemoveContainer" containerID="2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57"
Feb 19 00:21:24 crc kubenswrapper[4825]: E0219 00:21:24.335244 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57\": container with ID starting with 2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57 not found: ID does not exist" containerID="2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57"
Feb 19 00:21:24 crc kubenswrapper[4825]: I0219 00:21:24.335302 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57"} err="failed to get container status \"2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57\": rpc error: code = NotFound desc = could not find container \"2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57\": container with ID starting with 2d9b128c9ccd50927533c7134aa7ee840caa71fe84db71f45c5ab4b2cbe58f57 not found: ID does not exist"
Feb 19 00:21:24 crc kubenswrapper[4825]: I0219 00:21:24.353853 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-2rhwb"]
Feb 19 00:21:24 crc kubenswrapper[4825]: I0219 00:21:24.358569 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-2rhwb"]
Feb 19 00:21:24 crc kubenswrapper[4825]: I0219 00:21:24.966122 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-6jzhg"
Feb 19 00:21:24 crc kubenswrapper[4825]: I0219 00:21:24.966174 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-6jzhg"
Feb 19 00:21:25 crc kubenswrapper[4825]: I0219 00:21:25.019242 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-6jzhg"
Feb 19 00:21:25 crc kubenswrapper[4825]: I0219 00:21:25.079308 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfa94491-cd26-4c71-9c0e-b6ec0d172802" path="/var/lib/kubelet/pods/dfa94491-cd26-4c71-9c0e-b6ec0d172802/volumes"
Feb 19 00:21:28 crc kubenswrapper[4825]: I0219 00:21:28.823989 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:21:28 crc kubenswrapper[4825]: I0219 00:21:28.824452 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:21:28 crc kubenswrapper[4825]: I0219 00:21:28.824507 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tggq9"
Feb 19 00:21:28 crc kubenswrapper[4825]: I0219 00:21:28.825222 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4528630abd9298fa6ddba9ae1d069773d3681c2d7b7aa972cb3ffb2f6b64f7c"} pod="openshift-machine-config-operator/machine-config-daemon-tggq9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 00:21:28 crc kubenswrapper[4825]: I0219 00:21:28.825285 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" containerID="cri-o://f4528630abd9298fa6ddba9ae1d069773d3681c2d7b7aa972cb3ffb2f6b64f7c" gracePeriod=600
Feb 19 00:21:29 crc kubenswrapper[4825]: I0219 00:21:29.365899 4825 generic.go:334] "Generic (PLEG): container finished" podID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerID="f4528630abd9298fa6ddba9ae1d069773d3681c2d7b7aa972cb3ffb2f6b64f7c" exitCode=0
Feb 19 00:21:29 crc kubenswrapper[4825]: I0219 00:21:29.365960 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerDied","Data":"f4528630abd9298fa6ddba9ae1d069773d3681c2d7b7aa972cb3ffb2f6b64f7c"}
Feb 19 00:21:29 crc kubenswrapper[4825]: I0219 00:21:29.366423 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerStarted","Data":"f27439f26e1215161e345d1607b94ea7543caa15aa262252676acb2360916f66"}
Feb 19 00:21:29 crc kubenswrapper[4825]: I0219 00:21:29.366459 4825 scope.go:117] "RemoveContainer" containerID="3977a1de60d33698055567352ee370d0b71d26733409f8b00d78c8c89781f897"
Feb 19 00:21:35 crc kubenswrapper[4825]: I0219 00:21:35.007177 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-6jzhg"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.170224 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"]
Feb 19 00:21:47 crc kubenswrapper[4825]: E0219 00:21:47.171433 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa94491-cd26-4c71-9c0e-b6ec0d172802" containerName="registry-server"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.171451 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa94491-cd26-4c71-9c0e-b6ec0d172802" containerName="registry-server"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.171633 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa94491-cd26-4c71-9c0e-b6ec0d172802" containerName="registry-server"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.172589 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.187708 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"]
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.232211 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spkcc\" (UniqueName: \"kubernetes.io/projected/497d7b72-86fd-46ce-97a6-6ecaf57777fb-kube-api-access-spkcc\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt\" (UID: \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.232278 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/497d7b72-86fd-46ce-97a6-6ecaf57777fb-util\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt\" (UID: \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.232312 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/497d7b72-86fd-46ce-97a6-6ecaf57777fb-bundle\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt\" (UID: \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.333495 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spkcc\" (UniqueName: \"kubernetes.io/projected/497d7b72-86fd-46ce-97a6-6ecaf57777fb-kube-api-access-spkcc\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt\" (UID: \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.333618 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/497d7b72-86fd-46ce-97a6-6ecaf57777fb-util\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt\" (UID: \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.333651 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/497d7b72-86fd-46ce-97a6-6ecaf57777fb-bundle\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt\" (UID: \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.334575 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/497d7b72-86fd-46ce-97a6-6ecaf57777fb-bundle\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt\" (UID: \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.334907 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/497d7b72-86fd-46ce-97a6-6ecaf57777fb-util\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt\" (UID: \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.365911 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spkcc\" (UniqueName: \"kubernetes.io/projected/497d7b72-86fd-46ce-97a6-6ecaf57777fb-kube-api-access-spkcc\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt\" (UID: \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.493822 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.719527 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt"]
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.979559 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz"]
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.981490 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.983834 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 19 00:21:47 crc kubenswrapper[4825]: I0219 00:21:47.986030 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz"]
Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.047757 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c60edbef-45ae-4b90-8ecc-f289c774e0c6-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz\" (UID: \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz"
Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.047884 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqszc\" (UniqueName: \"kubernetes.io/projected/c60edbef-45ae-4b90-8ecc-f289c774e0c6-kube-api-access-cqszc\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz\" (UID: \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz"
Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.047938 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c60edbef-45ae-4b90-8ecc-f289c774e0c6-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz\" (UID: \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz"
Feb 19 00:21:48 crc kubenswrapper[4825]:
I0219 00:21:48.149598 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c60edbef-45ae-4b90-8ecc-f289c774e0c6-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz\" (UID: \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz" Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.149678 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c60edbef-45ae-4b90-8ecc-f289c774e0c6-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz\" (UID: \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz" Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.149765 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqszc\" (UniqueName: \"kubernetes.io/projected/c60edbef-45ae-4b90-8ecc-f289c774e0c6-kube-api-access-cqszc\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz\" (UID: \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz" Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.150348 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c60edbef-45ae-4b90-8ecc-f289c774e0c6-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz\" (UID: \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz" Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.150661 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c60edbef-45ae-4b90-8ecc-f289c774e0c6-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz\" (UID: \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz" Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.177339 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqszc\" (UniqueName: \"kubernetes.io/projected/c60edbef-45ae-4b90-8ecc-f289c774e0c6-kube-api-access-cqszc\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz\" (UID: \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz" Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.297411 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz" Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.527771 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz"] Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.537681 4825 generic.go:334] "Generic (PLEG): container finished" podID="497d7b72-86fd-46ce-97a6-6ecaf57777fb" containerID="78696cc05aa4de1ea34123fb6ec6940b00788607b08c2cdda993e143ba0556c6" exitCode=0 Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.537759 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt" event={"ID":"497d7b72-86fd-46ce-97a6-6ecaf57777fb","Type":"ContainerDied","Data":"78696cc05aa4de1ea34123fb6ec6940b00788607b08c2cdda993e143ba0556c6"} Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.537840 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt" event={"ID":"497d7b72-86fd-46ce-97a6-6ecaf57777fb","Type":"ContainerStarted","Data":"449ba95bdac5ee7e1ed85901f34f2bfe354cb3431d039d37f037a4c63774e393"} Feb 19 00:21:48 crc kubenswrapper[4825]: W0219 00:21:48.572203 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc60edbef_45ae_4b90_8ecc_f289c774e0c6.slice/crio-16020c59ba95d881ed372d9997bb4b4547b3e84db8f0c882281192ca78b0f1eb WatchSource:0}: Error finding container 16020c59ba95d881ed372d9997bb4b4547b3e84db8f0c882281192ca78b0f1eb: Status 404 returned error can't find the container with id 16020c59ba95d881ed372d9997bb4b4547b3e84db8f0c882281192ca78b0f1eb Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.964111 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l"] Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.967354 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" Feb 19 00:21:48 crc kubenswrapper[4825]: I0219 00:21:48.998654 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l"] Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.064472 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxpl\" (UniqueName: \"kubernetes.io/projected/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-kube-api-access-bhxpl\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l\" (UID: \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.064553 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-bundle\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l\" (UID: \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.064801 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-util\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l\" (UID: \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.166265 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-util\") pod 
\"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l\" (UID: \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.166387 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhxpl\" (UniqueName: \"kubernetes.io/projected/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-kube-api-access-bhxpl\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l\" (UID: \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.166845 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-bundle\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l\" (UID: \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.166939 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-util\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l\" (UID: \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.167137 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-bundle\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l\" (UID: \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\") " 
pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.193784 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhxpl\" (UniqueName: \"kubernetes.io/projected/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-kube-api-access-bhxpl\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l\" (UID: \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.324625 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.528082 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l"] Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.549095 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" event={"ID":"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8","Type":"ContainerStarted","Data":"4cefb128c514715f0211429e19ab8e32f16edd066887d4e9d79817af32e0b5ab"} Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.551709 4825 generic.go:334] "Generic (PLEG): container finished" podID="497d7b72-86fd-46ce-97a6-6ecaf57777fb" containerID="0b96366d0df6b56d3fb0ce8d66e3bcf66d72e5498f355788648ef2157685063c" exitCode=0 Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.552601 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt" event={"ID":"497d7b72-86fd-46ce-97a6-6ecaf57777fb","Type":"ContainerDied","Data":"0b96366d0df6b56d3fb0ce8d66e3bcf66d72e5498f355788648ef2157685063c"} Feb 19 00:21:49 crc 
kubenswrapper[4825]: I0219 00:21:49.555111 4825 generic.go:334] "Generic (PLEG): container finished" podID="c60edbef-45ae-4b90-8ecc-f289c774e0c6" containerID="560582efcba59bb56850a752b3456714f4fdb94352c32ff33ca33893e3a8e16d" exitCode=0 Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.555142 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz" event={"ID":"c60edbef-45ae-4b90-8ecc-f289c774e0c6","Type":"ContainerDied","Data":"560582efcba59bb56850a752b3456714f4fdb94352c32ff33ca33893e3a8e16d"} Feb 19 00:21:49 crc kubenswrapper[4825]: I0219 00:21:49.555163 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz" event={"ID":"c60edbef-45ae-4b90-8ecc-f289c774e0c6","Type":"ContainerStarted","Data":"16020c59ba95d881ed372d9997bb4b4547b3e84db8f0c882281192ca78b0f1eb"} Feb 19 00:21:50 crc kubenswrapper[4825]: I0219 00:21:50.569642 4825 generic.go:334] "Generic (PLEG): container finished" podID="7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" containerID="9c09cbc0909a589a38138660cc526459d76c363a56ec54d4fd1d88c2fd4e7d8d" exitCode=0 Feb 19 00:21:50 crc kubenswrapper[4825]: I0219 00:21:50.569748 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" event={"ID":"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8","Type":"ContainerDied","Data":"9c09cbc0909a589a38138660cc526459d76c363a56ec54d4fd1d88c2fd4e7d8d"} Feb 19 00:21:50 crc kubenswrapper[4825]: I0219 00:21:50.584130 4825 generic.go:334] "Generic (PLEG): container finished" podID="497d7b72-86fd-46ce-97a6-6ecaf57777fb" containerID="5b757b9f3def221bc2556384ee160e5d8a14e68e215f63424c24ba939a8d5290" exitCode=0 Feb 19 00:21:50 crc kubenswrapper[4825]: I0219 00:21:50.584221 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt" event={"ID":"497d7b72-86fd-46ce-97a6-6ecaf57777fb","Type":"ContainerDied","Data":"5b757b9f3def221bc2556384ee160e5d8a14e68e215f63424c24ba939a8d5290"} Feb 19 00:21:51 crc kubenswrapper[4825]: I0219 00:21:51.591882 4825 generic.go:334] "Generic (PLEG): container finished" podID="c60edbef-45ae-4b90-8ecc-f289c774e0c6" containerID="4bd45740fd8f7273f97d8e50577aac3f3e4dc76c69d46757db1e3eaeebc22bce" exitCode=0 Feb 19 00:21:51 crc kubenswrapper[4825]: I0219 00:21:51.591955 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz" event={"ID":"c60edbef-45ae-4b90-8ecc-f289c774e0c6","Type":"ContainerDied","Data":"4bd45740fd8f7273f97d8e50577aac3f3e4dc76c69d46757db1e3eaeebc22bce"} Feb 19 00:21:51 crc kubenswrapper[4825]: I0219 00:21:51.595899 4825 generic.go:334] "Generic (PLEG): container finished" podID="7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" containerID="0039ca93db5423f3c08a87d63164e874534b5a58167c00f980837233155fe10e" exitCode=0 Feb 19 00:21:51 crc kubenswrapper[4825]: I0219 00:21:51.596826 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" event={"ID":"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8","Type":"ContainerDied","Data":"0039ca93db5423f3c08a87d63164e874534b5a58167c00f980837233155fe10e"} Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.068105 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.217255 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/497d7b72-86fd-46ce-97a6-6ecaf57777fb-bundle\") pod \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\" (UID: \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\") " Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.217716 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spkcc\" (UniqueName: \"kubernetes.io/projected/497d7b72-86fd-46ce-97a6-6ecaf57777fb-kube-api-access-spkcc\") pod \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\" (UID: \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\") " Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.217817 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/497d7b72-86fd-46ce-97a6-6ecaf57777fb-util\") pod \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\" (UID: \"497d7b72-86fd-46ce-97a6-6ecaf57777fb\") " Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.219989 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497d7b72-86fd-46ce-97a6-6ecaf57777fb-bundle" (OuterVolumeSpecName: "bundle") pod "497d7b72-86fd-46ce-97a6-6ecaf57777fb" (UID: "497d7b72-86fd-46ce-97a6-6ecaf57777fb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.226928 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/497d7b72-86fd-46ce-97a6-6ecaf57777fb-kube-api-access-spkcc" (OuterVolumeSpecName: "kube-api-access-spkcc") pod "497d7b72-86fd-46ce-97a6-6ecaf57777fb" (UID: "497d7b72-86fd-46ce-97a6-6ecaf57777fb"). InnerVolumeSpecName "kube-api-access-spkcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.241004 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497d7b72-86fd-46ce-97a6-6ecaf57777fb-util" (OuterVolumeSpecName: "util") pod "497d7b72-86fd-46ce-97a6-6ecaf57777fb" (UID: "497d7b72-86fd-46ce-97a6-6ecaf57777fb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.319816 4825 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/497d7b72-86fd-46ce-97a6-6ecaf57777fb-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.319866 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spkcc\" (UniqueName: \"kubernetes.io/projected/497d7b72-86fd-46ce-97a6-6ecaf57777fb-kube-api-access-spkcc\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.319882 4825 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/497d7b72-86fd-46ce-97a6-6ecaf57777fb-util\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.604143 4825 generic.go:334] "Generic (PLEG): container finished" podID="7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" containerID="f50c0a85a89a2dc6f4d081a8b26f30771981df60243c911f47bd2da50aea3c22" exitCode=0 Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.604757 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" event={"ID":"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8","Type":"ContainerDied","Data":"f50c0a85a89a2dc6f4d081a8b26f30771981df60243c911f47bd2da50aea3c22"} Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.607308 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt" event={"ID":"497d7b72-86fd-46ce-97a6-6ecaf57777fb","Type":"ContainerDied","Data":"449ba95bdac5ee7e1ed85901f34f2bfe354cb3431d039d37f037a4c63774e393"} Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.607341 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="449ba95bdac5ee7e1ed85901f34f2bfe354cb3431d039d37f037a4c63774e393" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.607403 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bj8nrt" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.615719 4825 generic.go:334] "Generic (PLEG): container finished" podID="c60edbef-45ae-4b90-8ecc-f289c774e0c6" containerID="8e22026cc2a54c521b887dad855957dc829636fbf5b67140bc089dad00847abc" exitCode=0 Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.615780 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz" event={"ID":"c60edbef-45ae-4b90-8ecc-f289c774e0c6","Type":"ContainerDied","Data":"8e22026cc2a54c521b887dad855957dc829636fbf5b67140bc089dad00847abc"} Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.900480 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mt2p5"] Feb 19 00:21:52 crc kubenswrapper[4825]: E0219 00:21:52.900778 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497d7b72-86fd-46ce-97a6-6ecaf57777fb" containerName="extract" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.900793 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="497d7b72-86fd-46ce-97a6-6ecaf57777fb" containerName="extract" Feb 19 00:21:52 crc kubenswrapper[4825]: E0219 00:21:52.900801 4825 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="497d7b72-86fd-46ce-97a6-6ecaf57777fb" containerName="pull" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.900806 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="497d7b72-86fd-46ce-97a6-6ecaf57777fb" containerName="pull" Feb 19 00:21:52 crc kubenswrapper[4825]: E0219 00:21:52.900816 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497d7b72-86fd-46ce-97a6-6ecaf57777fb" containerName="util" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.900824 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="497d7b72-86fd-46ce-97a6-6ecaf57777fb" containerName="util" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.900951 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="497d7b72-86fd-46ce-97a6-6ecaf57777fb" containerName="extract" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.901963 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mt2p5" Feb 19 00:21:52 crc kubenswrapper[4825]: I0219 00:21:52.916395 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mt2p5"] Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.056702 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8ng7\" (UniqueName: \"kubernetes.io/projected/397c4697-38bb-47a6-914a-dcbf2be403d7-kube-api-access-s8ng7\") pod \"redhat-operators-mt2p5\" (UID: \"397c4697-38bb-47a6-914a-dcbf2be403d7\") " pod="openshift-marketplace/redhat-operators-mt2p5" Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.056775 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397c4697-38bb-47a6-914a-dcbf2be403d7-utilities\") pod \"redhat-operators-mt2p5\" (UID: \"397c4697-38bb-47a6-914a-dcbf2be403d7\") " pod="openshift-marketplace/redhat-operators-mt2p5" 
Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.056820 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397c4697-38bb-47a6-914a-dcbf2be403d7-catalog-content\") pod \"redhat-operators-mt2p5\" (UID: \"397c4697-38bb-47a6-914a-dcbf2be403d7\") " pod="openshift-marketplace/redhat-operators-mt2p5" Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.158394 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397c4697-38bb-47a6-914a-dcbf2be403d7-catalog-content\") pod \"redhat-operators-mt2p5\" (UID: \"397c4697-38bb-47a6-914a-dcbf2be403d7\") " pod="openshift-marketplace/redhat-operators-mt2p5" Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.158599 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8ng7\" (UniqueName: \"kubernetes.io/projected/397c4697-38bb-47a6-914a-dcbf2be403d7-kube-api-access-s8ng7\") pod \"redhat-operators-mt2p5\" (UID: \"397c4697-38bb-47a6-914a-dcbf2be403d7\") " pod="openshift-marketplace/redhat-operators-mt2p5" Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.158678 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397c4697-38bb-47a6-914a-dcbf2be403d7-utilities\") pod \"redhat-operators-mt2p5\" (UID: \"397c4697-38bb-47a6-914a-dcbf2be403d7\") " pod="openshift-marketplace/redhat-operators-mt2p5" Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.159305 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397c4697-38bb-47a6-914a-dcbf2be403d7-catalog-content\") pod \"redhat-operators-mt2p5\" (UID: \"397c4697-38bb-47a6-914a-dcbf2be403d7\") " pod="openshift-marketplace/redhat-operators-mt2p5" Feb 19 00:21:53 crc 
kubenswrapper[4825]: I0219 00:21:53.159366 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397c4697-38bb-47a6-914a-dcbf2be403d7-utilities\") pod \"redhat-operators-mt2p5\" (UID: \"397c4697-38bb-47a6-914a-dcbf2be403d7\") " pod="openshift-marketplace/redhat-operators-mt2p5" Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.187334 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8ng7\" (UniqueName: \"kubernetes.io/projected/397c4697-38bb-47a6-914a-dcbf2be403d7-kube-api-access-s8ng7\") pod \"redhat-operators-mt2p5\" (UID: \"397c4697-38bb-47a6-914a-dcbf2be403d7\") " pod="openshift-marketplace/redhat-operators-mt2p5" Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.218435 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mt2p5" Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.460748 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mt2p5"] Feb 19 00:21:53 crc kubenswrapper[4825]: W0219 00:21:53.477230 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod397c4697_38bb_47a6_914a_dcbf2be403d7.slice/crio-576d91456eeab2453af4a58bd0484f94f64994c50b5f9b49a2a5f5729c2bc906 WatchSource:0}: Error finding container 576d91456eeab2453af4a58bd0484f94f64994c50b5f9b49a2a5f5729c2bc906: Status 404 returned error can't find the container with id 576d91456eeab2453af4a58bd0484f94f64994c50b5f9b49a2a5f5729c2bc906 Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.629044 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt2p5" event={"ID":"397c4697-38bb-47a6-914a-dcbf2be403d7","Type":"ContainerStarted","Data":"576d91456eeab2453af4a58bd0484f94f64994c50b5f9b49a2a5f5729c2bc906"} Feb 19 00:21:53 crc 
kubenswrapper[4825]: I0219 00:21:53.860951 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz"
Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.925770 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l"
Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.971183 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c60edbef-45ae-4b90-8ecc-f289c774e0c6-util\") pod \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\" (UID: \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\") "
Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.971242 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c60edbef-45ae-4b90-8ecc-f289c774e0c6-bundle\") pod \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\" (UID: \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\") "
Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.971327 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqszc\" (UniqueName: \"kubernetes.io/projected/c60edbef-45ae-4b90-8ecc-f289c774e0c6-kube-api-access-cqszc\") pod \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\" (UID: \"c60edbef-45ae-4b90-8ecc-f289c774e0c6\") "
Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.973191 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c60edbef-45ae-4b90-8ecc-f289c774e0c6-bundle" (OuterVolumeSpecName: "bundle") pod "c60edbef-45ae-4b90-8ecc-f289c774e0c6" (UID: "c60edbef-45ae-4b90-8ecc-f289c774e0c6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:21:53 crc kubenswrapper[4825]: I0219 00:21:53.978437 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c60edbef-45ae-4b90-8ecc-f289c774e0c6-kube-api-access-cqszc" (OuterVolumeSpecName: "kube-api-access-cqszc") pod "c60edbef-45ae-4b90-8ecc-f289c774e0c6" (UID: "c60edbef-45ae-4b90-8ecc-f289c774e0c6"). InnerVolumeSpecName "kube-api-access-cqszc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.072969 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-bundle\") pod \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\" (UID: \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\") "
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.073123 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-util\") pod \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\" (UID: \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\") "
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.073183 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhxpl\" (UniqueName: \"kubernetes.io/projected/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-kube-api-access-bhxpl\") pod \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\" (UID: \"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8\") "
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.073624 4825 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c60edbef-45ae-4b90-8ecc-f289c774e0c6-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.073651 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqszc\" (UniqueName: \"kubernetes.io/projected/c60edbef-45ae-4b90-8ecc-f289c774e0c6-kube-api-access-cqszc\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.073899 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-bundle" (OuterVolumeSpecName: "bundle") pod "7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" (UID: "7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.077675 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-kube-api-access-bhxpl" (OuterVolumeSpecName: "kube-api-access-bhxpl") pod "7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" (UID: "7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8"). InnerVolumeSpecName "kube-api-access-bhxpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.090554 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-util" (OuterVolumeSpecName: "util") pod "7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" (UID: "7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.175142 4825 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.175189 4825 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-util\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.175203 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhxpl\" (UniqueName: \"kubernetes.io/projected/7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8-kube-api-access-bhxpl\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.226849 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c60edbef-45ae-4b90-8ecc-f289c774e0c6-util" (OuterVolumeSpecName: "util") pod "c60edbef-45ae-4b90-8ecc-f289c774e0c6" (UID: "c60edbef-45ae-4b90-8ecc-f289c774e0c6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.276934 4825 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c60edbef-45ae-4b90-8ecc-f289c774e0c6-util\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.636858 4825 generic.go:334] "Generic (PLEG): container finished" podID="397c4697-38bb-47a6-914a-dcbf2be403d7" containerID="a078367e91103baa58fdde88d1ade6c0654b6994139d8b6be91492d2c2e1e8a3" exitCode=0
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.636959 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt2p5" event={"ID":"397c4697-38bb-47a6-914a-dcbf2be403d7","Type":"ContainerDied","Data":"a078367e91103baa58fdde88d1ade6c0654b6994139d8b6be91492d2c2e1e8a3"}
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.659271 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz"
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.659226 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz" event={"ID":"c60edbef-45ae-4b90-8ecc-f289c774e0c6","Type":"ContainerDied","Data":"16020c59ba95d881ed372d9997bb4b4547b3e84db8f0c882281192ca78b0f1eb"}
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.659455 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16020c59ba95d881ed372d9997bb4b4547b3e84db8f0c882281192ca78b0f1eb"
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.664226 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l" event={"ID":"7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8","Type":"ContainerDied","Data":"4cefb128c514715f0211429e19ab8e32f16edd066887d4e9d79817af32e0b5ab"}
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.664285 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cefb128c514715f0211429e19ab8e32f16edd066887d4e9d79817af32e0b5ab"
Feb 19 00:21:54 crc kubenswrapper[4825]: I0219 00:21:54.664322 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7ebqrc8l"
Feb 19 00:21:55 crc kubenswrapper[4825]: I0219 00:21:55.674582 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt2p5" event={"ID":"397c4697-38bb-47a6-914a-dcbf2be403d7","Type":"ContainerStarted","Data":"8f25abb663e89c060e0f766ca50b40ae449866be4a9b19645b49007fa1d89844"}
Feb 19 00:21:56 crc kubenswrapper[4825]: I0219 00:21:56.687071 4825 generic.go:334] "Generic (PLEG): container finished" podID="397c4697-38bb-47a6-914a-dcbf2be403d7" containerID="8f25abb663e89c060e0f766ca50b40ae449866be4a9b19645b49007fa1d89844" exitCode=0
Feb 19 00:21:56 crc kubenswrapper[4825]: I0219 00:21:56.687166 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt2p5" event={"ID":"397c4697-38bb-47a6-914a-dcbf2be403d7","Type":"ContainerDied","Data":"8f25abb663e89c060e0f766ca50b40ae449866be4a9b19645b49007fa1d89844"}
Feb 19 00:21:57 crc kubenswrapper[4825]: I0219 00:21:57.717649 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt2p5" event={"ID":"397c4697-38bb-47a6-914a-dcbf2be403d7","Type":"ContainerStarted","Data":"b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469"}
Feb 19 00:21:57 crc kubenswrapper[4825]: I0219 00:21:57.754882 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mt2p5" podStartSLOduration=3.286797086 podStartE2EDuration="5.754860227s" podCreationTimestamp="2026-02-19 00:21:52 +0000 UTC" firstStartedPulling="2026-02-19 00:21:54.640287847 +0000 UTC m=+860.331253904" lastFinishedPulling="2026-02-19 00:21:57.108350998 +0000 UTC m=+862.799317045" observedRunningTime="2026-02-19 00:21:57.748750695 +0000 UTC m=+863.439716742" watchObservedRunningTime="2026-02-19 00:21:57.754860227 +0000 UTC m=+863.445826274"
Feb 19 00:21:58 crc kubenswrapper[4825]: I0219 00:21:58.981671 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn"]
Feb 19 00:21:58 crc kubenswrapper[4825]: E0219 00:21:58.981972 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" containerName="pull"
Feb 19 00:21:58 crc kubenswrapper[4825]: I0219 00:21:58.981991 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" containerName="pull"
Feb 19 00:21:58 crc kubenswrapper[4825]: E0219 00:21:58.982006 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60edbef-45ae-4b90-8ecc-f289c774e0c6" containerName="pull"
Feb 19 00:21:58 crc kubenswrapper[4825]: I0219 00:21:58.982015 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60edbef-45ae-4b90-8ecc-f289c774e0c6" containerName="pull"
Feb 19 00:21:58 crc kubenswrapper[4825]: E0219 00:21:58.982029 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" containerName="extract"
Feb 19 00:21:58 crc kubenswrapper[4825]: I0219 00:21:58.982038 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" containerName="extract"
Feb 19 00:21:58 crc kubenswrapper[4825]: E0219 00:21:58.982055 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60edbef-45ae-4b90-8ecc-f289c774e0c6" containerName="extract"
Feb 19 00:21:58 crc kubenswrapper[4825]: I0219 00:21:58.982063 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60edbef-45ae-4b90-8ecc-f289c774e0c6" containerName="extract"
Feb 19 00:21:58 crc kubenswrapper[4825]: E0219 00:21:58.982077 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" containerName="util"
Feb 19 00:21:58 crc kubenswrapper[4825]: I0219 00:21:58.982084 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" containerName="util"
Feb 19 00:21:58 crc kubenswrapper[4825]: E0219 00:21:58.982094 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60edbef-45ae-4b90-8ecc-f289c774e0c6" containerName="util"
Feb 19 00:21:58 crc kubenswrapper[4825]: I0219 00:21:58.982108 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60edbef-45ae-4b90-8ecc-f289c774e0c6" containerName="util"
Feb 19 00:21:58 crc kubenswrapper[4825]: I0219 00:21:58.982285 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="7166d18d-04cb-4d5d-bf2a-9a08f8ce52a8" containerName="extract"
Feb 19 00:21:58 crc kubenswrapper[4825]: I0219 00:21:58.982301 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="c60edbef-45ae-4b90-8ecc-f289c774e0c6" containerName="extract"
Feb 19 00:21:58 crc kubenswrapper[4825]: I0219 00:21:58.982976 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn"
Feb 19 00:21:58 crc kubenswrapper[4825]: I0219 00:21:58.986309 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-jfngz"
Feb 19 00:21:59 crc kubenswrapper[4825]: I0219 00:21:59.000155 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn"]
Feb 19 00:21:59 crc kubenswrapper[4825]: I0219 00:21:59.163013 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a9c56506-cd68-419c-9a93-a5c23dc0bc86-runner\") pod \"service-telemetry-operator-55b89ddfb9-9hhqn\" (UID: \"a9c56506-cd68-419c-9a93-a5c23dc0bc86\") " pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn"
Feb 19 00:21:59 crc kubenswrapper[4825]: I0219 00:21:59.163090 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f52w\" (UniqueName: \"kubernetes.io/projected/a9c56506-cd68-419c-9a93-a5c23dc0bc86-kube-api-access-9f52w\") pod \"service-telemetry-operator-55b89ddfb9-9hhqn\" (UID: \"a9c56506-cd68-419c-9a93-a5c23dc0bc86\") " pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn"
Feb 19 00:21:59 crc kubenswrapper[4825]: I0219 00:21:59.264236 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a9c56506-cd68-419c-9a93-a5c23dc0bc86-runner\") pod \"service-telemetry-operator-55b89ddfb9-9hhqn\" (UID: \"a9c56506-cd68-419c-9a93-a5c23dc0bc86\") " pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn"
Feb 19 00:21:59 crc kubenswrapper[4825]: I0219 00:21:59.264308 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f52w\" (UniqueName: \"kubernetes.io/projected/a9c56506-cd68-419c-9a93-a5c23dc0bc86-kube-api-access-9f52w\") pod \"service-telemetry-operator-55b89ddfb9-9hhqn\" (UID: \"a9c56506-cd68-419c-9a93-a5c23dc0bc86\") " pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn"
Feb 19 00:21:59 crc kubenswrapper[4825]: I0219 00:21:59.265143 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a9c56506-cd68-419c-9a93-a5c23dc0bc86-runner\") pod \"service-telemetry-operator-55b89ddfb9-9hhqn\" (UID: \"a9c56506-cd68-419c-9a93-a5c23dc0bc86\") " pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn"
Feb 19 00:21:59 crc kubenswrapper[4825]: I0219 00:21:59.293877 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f52w\" (UniqueName: \"kubernetes.io/projected/a9c56506-cd68-419c-9a93-a5c23dc0bc86-kube-api-access-9f52w\") pod \"service-telemetry-operator-55b89ddfb9-9hhqn\" (UID: \"a9c56506-cd68-419c-9a93-a5c23dc0bc86\") " pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn"
Feb 19 00:21:59 crc kubenswrapper[4825]: I0219 00:21:59.300496 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn"
Feb 19 00:21:59 crc kubenswrapper[4825]: W0219 00:21:59.800918 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9c56506_cd68_419c_9a93_a5c23dc0bc86.slice/crio-71e7c7db5c2173d7844c8514d221b12024da283a648472018842aef1ee7d6d95 WatchSource:0}: Error finding container 71e7c7db5c2173d7844c8514d221b12024da283a648472018842aef1ee7d6d95: Status 404 returned error can't find the container with id 71e7c7db5c2173d7844c8514d221b12024da283a648472018842aef1ee7d6d95
Feb 19 00:21:59 crc kubenswrapper[4825]: I0219 00:21:59.802835 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn"]
Feb 19 00:22:00 crc kubenswrapper[4825]: I0219 00:22:00.772308 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn" event={"ID":"a9c56506-cd68-419c-9a93-a5c23dc0bc86","Type":"ContainerStarted","Data":"71e7c7db5c2173d7844c8514d221b12024da283a648472018842aef1ee7d6d95"}
Feb 19 00:22:00 crc kubenswrapper[4825]: I0219 00:22:00.787419 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-7nntl"]
Feb 19 00:22:00 crc kubenswrapper[4825]: I0219 00:22:00.788397 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-7nntl"
Feb 19 00:22:00 crc kubenswrapper[4825]: I0219 00:22:00.791952 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-nhdzj"
Feb 19 00:22:00 crc kubenswrapper[4825]: I0219 00:22:00.803130 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-7nntl"]
Feb 19 00:22:00 crc kubenswrapper[4825]: I0219 00:22:00.902034 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj67w\" (UniqueName: \"kubernetes.io/projected/d7368f66-3397-499a-bb7c-b0bcb7f6e919-kube-api-access-lj67w\") pod \"interconnect-operator-5bb49f789d-7nntl\" (UID: \"d7368f66-3397-499a-bb7c-b0bcb7f6e919\") " pod="service-telemetry/interconnect-operator-5bb49f789d-7nntl"
Feb 19 00:22:01 crc kubenswrapper[4825]: I0219 00:22:01.003874 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj67w\" (UniqueName: \"kubernetes.io/projected/d7368f66-3397-499a-bb7c-b0bcb7f6e919-kube-api-access-lj67w\") pod \"interconnect-operator-5bb49f789d-7nntl\" (UID: \"d7368f66-3397-499a-bb7c-b0bcb7f6e919\") " pod="service-telemetry/interconnect-operator-5bb49f789d-7nntl"
Feb 19 00:22:01 crc kubenswrapper[4825]: I0219 00:22:01.042259 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj67w\" (UniqueName: \"kubernetes.io/projected/d7368f66-3397-499a-bb7c-b0bcb7f6e919-kube-api-access-lj67w\") pod \"interconnect-operator-5bb49f789d-7nntl\" (UID: \"d7368f66-3397-499a-bb7c-b0bcb7f6e919\") " pod="service-telemetry/interconnect-operator-5bb49f789d-7nntl"
Feb 19 00:22:01 crc kubenswrapper[4825]: I0219 00:22:01.153978 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-7nntl"
Feb 19 00:22:01 crc kubenswrapper[4825]: I0219 00:22:01.674776 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-7nntl"]
Feb 19 00:22:01 crc kubenswrapper[4825]: W0219 00:22:01.693816 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7368f66_3397_499a_bb7c_b0bcb7f6e919.slice/crio-3058fa8eb9fbacb347645f84651f54bd6e2ae4c7fc300460fa004a1a271c9ed4 WatchSource:0}: Error finding container 3058fa8eb9fbacb347645f84651f54bd6e2ae4c7fc300460fa004a1a271c9ed4: Status 404 returned error can't find the container with id 3058fa8eb9fbacb347645f84651f54bd6e2ae4c7fc300460fa004a1a271c9ed4
Feb 19 00:22:01 crc kubenswrapper[4825]: I0219 00:22:01.785809 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-7nntl" event={"ID":"d7368f66-3397-499a-bb7c-b0bcb7f6e919","Type":"ContainerStarted","Data":"3058fa8eb9fbacb347645f84651f54bd6e2ae4c7fc300460fa004a1a271c9ed4"}
Feb 19 00:22:02 crc kubenswrapper[4825]: I0219 00:22:02.385432 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr"]
Feb 19 00:22:02 crc kubenswrapper[4825]: I0219 00:22:02.386317 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr"
Feb 19 00:22:02 crc kubenswrapper[4825]: I0219 00:22:02.391780 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-dswxx"
Feb 19 00:22:02 crc kubenswrapper[4825]: I0219 00:22:02.402718 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr"]
Feb 19 00:22:02 crc kubenswrapper[4825]: I0219 00:22:02.534591 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8e4a26b2-2aff-4606-a21e-b8bb948103ca-runner\") pod \"smart-gateway-operator-bbbc889bc-dk5kr\" (UID: \"8e4a26b2-2aff-4606-a21e-b8bb948103ca\") " pod="service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr"
Feb 19 00:22:02 crc kubenswrapper[4825]: I0219 00:22:02.534775 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p98fw\" (UniqueName: \"kubernetes.io/projected/8e4a26b2-2aff-4606-a21e-b8bb948103ca-kube-api-access-p98fw\") pod \"smart-gateway-operator-bbbc889bc-dk5kr\" (UID: \"8e4a26b2-2aff-4606-a21e-b8bb948103ca\") " pod="service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr"
Feb 19 00:22:02 crc kubenswrapper[4825]: I0219 00:22:02.638081 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8e4a26b2-2aff-4606-a21e-b8bb948103ca-runner\") pod \"smart-gateway-operator-bbbc889bc-dk5kr\" (UID: \"8e4a26b2-2aff-4606-a21e-b8bb948103ca\") " pod="service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr"
Feb 19 00:22:02 crc kubenswrapper[4825]: I0219 00:22:02.638184 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p98fw\" (UniqueName: \"kubernetes.io/projected/8e4a26b2-2aff-4606-a21e-b8bb948103ca-kube-api-access-p98fw\") pod \"smart-gateway-operator-bbbc889bc-dk5kr\" (UID: \"8e4a26b2-2aff-4606-a21e-b8bb948103ca\") " pod="service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr"
Feb 19 00:22:02 crc kubenswrapper[4825]: I0219 00:22:02.638942 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8e4a26b2-2aff-4606-a21e-b8bb948103ca-runner\") pod \"smart-gateway-operator-bbbc889bc-dk5kr\" (UID: \"8e4a26b2-2aff-4606-a21e-b8bb948103ca\") " pod="service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr"
Feb 19 00:22:02 crc kubenswrapper[4825]: I0219 00:22:02.666673 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p98fw\" (UniqueName: \"kubernetes.io/projected/8e4a26b2-2aff-4606-a21e-b8bb948103ca-kube-api-access-p98fw\") pod \"smart-gateway-operator-bbbc889bc-dk5kr\" (UID: \"8e4a26b2-2aff-4606-a21e-b8bb948103ca\") " pod="service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr"
Feb 19 00:22:02 crc kubenswrapper[4825]: I0219 00:22:02.705547 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr"
Feb 19 00:22:03 crc kubenswrapper[4825]: I0219 00:22:03.222295 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mt2p5"
Feb 19 00:22:03 crc kubenswrapper[4825]: I0219 00:22:03.223703 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mt2p5"
Feb 19 00:22:03 crc kubenswrapper[4825]: I0219 00:22:03.535783 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr"]
Feb 19 00:22:03 crc kubenswrapper[4825]: W0219 00:22:03.561769 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e4a26b2_2aff_4606_a21e_b8bb948103ca.slice/crio-a4165d0330adf98be1df70096e2d2e4e3d4f2c57d9ab44f6cf179b26579b3f0b WatchSource:0}: Error finding container a4165d0330adf98be1df70096e2d2e4e3d4f2c57d9ab44f6cf179b26579b3f0b: Status 404 returned error can't find the container with id a4165d0330adf98be1df70096e2d2e4e3d4f2c57d9ab44f6cf179b26579b3f0b
Feb 19 00:22:03 crc kubenswrapper[4825]: I0219 00:22:03.812590 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr" event={"ID":"8e4a26b2-2aff-4606-a21e-b8bb948103ca","Type":"ContainerStarted","Data":"a4165d0330adf98be1df70096e2d2e4e3d4f2c57d9ab44f6cf179b26579b3f0b"}
Feb 19 00:22:04 crc kubenswrapper[4825]: I0219 00:22:04.309208 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mt2p5" podUID="397c4697-38bb-47a6-914a-dcbf2be403d7" containerName="registry-server" probeResult="failure" output=<
Feb 19 00:22:04 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s
Feb 19 00:22:04 crc kubenswrapper[4825]: >
Feb 19 00:22:13 crc kubenswrapper[4825]: I0219 00:22:13.279370 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mt2p5"
Feb 19 00:22:13 crc kubenswrapper[4825]: I0219 00:22:13.346735 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mt2p5"
Feb 19 00:22:15 crc kubenswrapper[4825]: I0219 00:22:15.501878 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mt2p5"]
Feb 19 00:22:15 crc kubenswrapper[4825]: I0219 00:22:15.502261 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mt2p5" podUID="397c4697-38bb-47a6-914a-dcbf2be403d7" containerName="registry-server" containerID="cri-o://b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469" gracePeriod=2
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.460243 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mt2p5"
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.586449 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8ng7\" (UniqueName: \"kubernetes.io/projected/397c4697-38bb-47a6-914a-dcbf2be403d7-kube-api-access-s8ng7\") pod \"397c4697-38bb-47a6-914a-dcbf2be403d7\" (UID: \"397c4697-38bb-47a6-914a-dcbf2be403d7\") "
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.586665 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397c4697-38bb-47a6-914a-dcbf2be403d7-utilities\") pod \"397c4697-38bb-47a6-914a-dcbf2be403d7\" (UID: \"397c4697-38bb-47a6-914a-dcbf2be403d7\") "
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.586693 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397c4697-38bb-47a6-914a-dcbf2be403d7-catalog-content\") pod \"397c4697-38bb-47a6-914a-dcbf2be403d7\" (UID: \"397c4697-38bb-47a6-914a-dcbf2be403d7\") "
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.589098 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/397c4697-38bb-47a6-914a-dcbf2be403d7-utilities" (OuterVolumeSpecName: "utilities") pod "397c4697-38bb-47a6-914a-dcbf2be403d7" (UID: "397c4697-38bb-47a6-914a-dcbf2be403d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.599838 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/397c4697-38bb-47a6-914a-dcbf2be403d7-kube-api-access-s8ng7" (OuterVolumeSpecName: "kube-api-access-s8ng7") pod "397c4697-38bb-47a6-914a-dcbf2be403d7" (UID: "397c4697-38bb-47a6-914a-dcbf2be403d7"). InnerVolumeSpecName "kube-api-access-s8ng7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.689297 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/397c4697-38bb-47a6-914a-dcbf2be403d7-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.689408 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8ng7\" (UniqueName: \"kubernetes.io/projected/397c4697-38bb-47a6-914a-dcbf2be403d7-kube-api-access-s8ng7\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.721433 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/397c4697-38bb-47a6-914a-dcbf2be403d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "397c4697-38bb-47a6-914a-dcbf2be403d7" (UID: "397c4697-38bb-47a6-914a-dcbf2be403d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.790520 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/397c4697-38bb-47a6-914a-dcbf2be403d7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.914823 4825 generic.go:334] "Generic (PLEG): container finished" podID="397c4697-38bb-47a6-914a-dcbf2be403d7" containerID="b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469" exitCode=0
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.914871 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt2p5" event={"ID":"397c4697-38bb-47a6-914a-dcbf2be403d7","Type":"ContainerDied","Data":"b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469"}
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.914905 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mt2p5" event={"ID":"397c4697-38bb-47a6-914a-dcbf2be403d7","Type":"ContainerDied","Data":"576d91456eeab2453af4a58bd0484f94f64994c50b5f9b49a2a5f5729c2bc906"}
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.914912 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mt2p5"
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.914924 4825 scope.go:117] "RemoveContainer" containerID="b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469"
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.950010 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mt2p5"]
Feb 19 00:22:16 crc kubenswrapper[4825]: I0219 00:22:16.956284 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mt2p5"]
Feb 19 00:22:17 crc kubenswrapper[4825]: I0219 00:22:17.075087 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="397c4697-38bb-47a6-914a-dcbf2be403d7" path="/var/lib/kubelet/pods/397c4697-38bb-47a6-914a-dcbf2be403d7/volumes"
Feb 19 00:22:18 crc kubenswrapper[4825]: I0219 00:22:18.917017 4825 scope.go:117] "RemoveContainer" containerID="8f25abb663e89c060e0f766ca50b40ae449866be4a9b19645b49007fa1d89844"
Feb 19 00:22:21 crc kubenswrapper[4825]: I0219 00:22:21.552703 4825 scope.go:117] "RemoveContainer" containerID="a078367e91103baa58fdde88d1ade6c0654b6994139d8b6be91492d2c2e1e8a3"
Feb 19 00:22:21 crc kubenswrapper[4825]: I0219 00:22:21.775669 4825 scope.go:117] "RemoveContainer" containerID="b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469"
Feb 19 00:22:21 crc kubenswrapper[4825]: E0219 00:22:21.776918 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469\": container with ID starting with b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469 not found: ID does not exist" containerID="b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469"
Feb 19 00:22:21 crc kubenswrapper[4825]: I0219 00:22:21.776978 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469"} err="failed to get container status \"b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469\": rpc error: code = NotFound desc = could not find container \"b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469\": container with ID starting with b0040001cfb8f1b9d58a42c542716ee91b96ee4448c25400b0861a6c6695f469 not found: ID does not exist"
Feb 19 00:22:21 crc kubenswrapper[4825]: I0219 00:22:21.777008 4825 scope.go:117] "RemoveContainer" containerID="8f25abb663e89c060e0f766ca50b40ae449866be4a9b19645b49007fa1d89844"
Feb 19 00:22:21 crc kubenswrapper[4825]: E0219 00:22:21.777365 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f25abb663e89c060e0f766ca50b40ae449866be4a9b19645b49007fa1d89844\": container with ID starting with 8f25abb663e89c060e0f766ca50b40ae449866be4a9b19645b49007fa1d89844 not found: ID does not exist" containerID="8f25abb663e89c060e0f766ca50b40ae449866be4a9b19645b49007fa1d89844"
Feb 19 00:22:21 crc kubenswrapper[4825]: I0219 00:22:21.777389 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f25abb663e89c060e0f766ca50b40ae449866be4a9b19645b49007fa1d89844"} err="failed to get container status \"8f25abb663e89c060e0f766ca50b40ae449866be4a9b19645b49007fa1d89844\": rpc error: code = NotFound desc = could not find container \"8f25abb663e89c060e0f766ca50b40ae449866be4a9b19645b49007fa1d89844\": container with ID starting with 8f25abb663e89c060e0f766ca50b40ae449866be4a9b19645b49007fa1d89844 not found: ID does not exist"
Feb 19 00:22:21 crc kubenswrapper[4825]: I0219 00:22:21.777403 4825 scope.go:117] "RemoveContainer" containerID="a078367e91103baa58fdde88d1ade6c0654b6994139d8b6be91492d2c2e1e8a3"
Feb 19 00:22:21 crc kubenswrapper[4825]: E0219 00:22:21.777685 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a078367e91103baa58fdde88d1ade6c0654b6994139d8b6be91492d2c2e1e8a3\": container with ID starting with a078367e91103baa58fdde88d1ade6c0654b6994139d8b6be91492d2c2e1e8a3 not found: ID does not exist" containerID="a078367e91103baa58fdde88d1ade6c0654b6994139d8b6be91492d2c2e1e8a3"
Feb 19 00:22:21 crc kubenswrapper[4825]: I0219 00:22:21.777735 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a078367e91103baa58fdde88d1ade6c0654b6994139d8b6be91492d2c2e1e8a3"} err="failed to get container status \"a078367e91103baa58fdde88d1ade6c0654b6994139d8b6be91492d2c2e1e8a3\": rpc error: code = NotFound desc = could not find container \"a078367e91103baa58fdde88d1ade6c0654b6994139d8b6be91492d2c2e1e8a3\": container with ID starting with a078367e91103baa58fdde88d1ade6c0654b6994139d8b6be91492d2c2e1e8a3 not found: ID does not exist"
Feb 19 00:22:26 crc kubenswrapper[4825]: E0219 00:22:26.337357 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/service-telemetry-operator:latest"
Feb 19 00:22:26 crc kubenswrapper[4825]: E0219 00:22:26.338175 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/infrawatch/service-telemetry-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:service-telemetry-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_WEBHOOK_SNMP_IMAGE,Value:quay.io/infrawatch/prometheus-webhook-snmp:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_IMAGE,Value:quay.io/prometheus/prometheus:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER_IMAGE,Value:quay.io/prometheus/alertmanager:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:service-telemetry-operator.v1.5.1768085182,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9f52w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,All
owPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-55b89ddfb9-9hhqn_service-telemetry(a9c56506-cd68-419c-9a93-a5c23dc0bc86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 00:22:26 crc kubenswrapper[4825]: E0219 00:22:26.339539 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn" podUID="a9c56506-cd68-419c-9a93-a5c23dc0bc86" Feb 19 00:22:27 crc kubenswrapper[4825]: E0219 00:22:27.019575 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/service-telemetry-operator:latest\\\"\"" pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn" podUID="a9c56506-cd68-419c-9a93-a5c23dc0bc86" Feb 19 00:22:30 crc kubenswrapper[4825]: I0219 00:22:30.041356 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr" event={"ID":"8e4a26b2-2aff-4606-a21e-b8bb948103ca","Type":"ContainerStarted","Data":"37a459c7efefd46b0d2c0b0de4fae261f624f1b05a4f06cec4f81f72045fef2a"} Feb 19 00:22:30 crc kubenswrapper[4825]: I0219 00:22:30.045275 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-7nntl" event={"ID":"d7368f66-3397-499a-bb7c-b0bcb7f6e919","Type":"ContainerStarted","Data":"8139e006f9d4ebd32fd22e290fe7829b3c64d0f8a1c14973102dd130e3c16111"} Feb 19 00:22:30 
crc kubenswrapper[4825]: I0219 00:22:30.059763 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bbbc889bc-dk5kr" podStartSLOduration=2.136564855 podStartE2EDuration="28.059742736s" podCreationTimestamp="2026-02-19 00:22:02 +0000 UTC" firstStartedPulling="2026-02-19 00:22:03.565589129 +0000 UTC m=+869.256555176" lastFinishedPulling="2026-02-19 00:22:29.48876701 +0000 UTC m=+895.179733057" observedRunningTime="2026-02-19 00:22:30.058699507 +0000 UTC m=+895.749665554" watchObservedRunningTime="2026-02-19 00:22:30.059742736 +0000 UTC m=+895.750708783" Feb 19 00:22:38 crc kubenswrapper[4825]: I0219 00:22:38.096136 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-7nntl" podStartSLOduration=18.173693842 podStartE2EDuration="38.096105705s" podCreationTimestamp="2026-02-19 00:22:00 +0000 UTC" firstStartedPulling="2026-02-19 00:22:01.697178158 +0000 UTC m=+867.388144205" lastFinishedPulling="2026-02-19 00:22:21.619590021 +0000 UTC m=+887.310556068" observedRunningTime="2026-02-19 00:22:30.075344231 +0000 UTC m=+895.766310278" watchObservedRunningTime="2026-02-19 00:22:38.096105705 +0000 UTC m=+903.787071782" Feb 19 00:22:39 crc kubenswrapper[4825]: I0219 00:22:39.124849 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn" event={"ID":"a9c56506-cd68-419c-9a93-a5c23dc0bc86","Type":"ContainerStarted","Data":"f7c9c6209a3bd22a8c17168b6a9b732afdda0ccb9726b2eea0efaab28e37e0b7"} Feb 19 00:22:39 crc kubenswrapper[4825]: I0219 00:22:39.146212 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-55b89ddfb9-9hhqn" podStartSLOduration=2.382356759 podStartE2EDuration="41.146186714s" podCreationTimestamp="2026-02-19 00:21:58 +0000 UTC" firstStartedPulling="2026-02-19 00:21:59.804426239 +0000 UTC 
m=+865.495392286" lastFinishedPulling="2026-02-19 00:22:38.568256194 +0000 UTC m=+904.259222241" observedRunningTime="2026-02-19 00:22:39.143257986 +0000 UTC m=+904.834224133" watchObservedRunningTime="2026-02-19 00:22:39.146186714 +0000 UTC m=+904.837152761" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.321225 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-77v4m"] Feb 19 00:22:46 crc kubenswrapper[4825]: E0219 00:22:46.322703 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397c4697-38bb-47a6-914a-dcbf2be403d7" containerName="extract-utilities" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.322724 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="397c4697-38bb-47a6-914a-dcbf2be403d7" containerName="extract-utilities" Feb 19 00:22:46 crc kubenswrapper[4825]: E0219 00:22:46.322761 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397c4697-38bb-47a6-914a-dcbf2be403d7" containerName="registry-server" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.322772 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="397c4697-38bb-47a6-914a-dcbf2be403d7" containerName="registry-server" Feb 19 00:22:46 crc kubenswrapper[4825]: E0219 00:22:46.322807 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="397c4697-38bb-47a6-914a-dcbf2be403d7" containerName="extract-content" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.322817 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="397c4697-38bb-47a6-914a-dcbf2be403d7" containerName="extract-content" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.323498 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="397c4697-38bb-47a6-914a-dcbf2be403d7" containerName="registry-server" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.327186 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.333131 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-77v4m"] Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.463041 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbbg9\" (UniqueName: \"kubernetes.io/projected/b0d34c95-0c24-4366-ab68-7d233f99cad9-kube-api-access-dbbg9\") pod \"certified-operators-77v4m\" (UID: \"b0d34c95-0c24-4366-ab68-7d233f99cad9\") " pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.463580 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d34c95-0c24-4366-ab68-7d233f99cad9-catalog-content\") pod \"certified-operators-77v4m\" (UID: \"b0d34c95-0c24-4366-ab68-7d233f99cad9\") " pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.463709 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d34c95-0c24-4366-ab68-7d233f99cad9-utilities\") pod \"certified-operators-77v4m\" (UID: \"b0d34c95-0c24-4366-ab68-7d233f99cad9\") " pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.564654 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d34c95-0c24-4366-ab68-7d233f99cad9-catalog-content\") pod \"certified-operators-77v4m\" (UID: \"b0d34c95-0c24-4366-ab68-7d233f99cad9\") " pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.564748 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d34c95-0c24-4366-ab68-7d233f99cad9-utilities\") pod \"certified-operators-77v4m\" (UID: \"b0d34c95-0c24-4366-ab68-7d233f99cad9\") " pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.564821 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbbg9\" (UniqueName: \"kubernetes.io/projected/b0d34c95-0c24-4366-ab68-7d233f99cad9-kube-api-access-dbbg9\") pod \"certified-operators-77v4m\" (UID: \"b0d34c95-0c24-4366-ab68-7d233f99cad9\") " pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.565331 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d34c95-0c24-4366-ab68-7d233f99cad9-utilities\") pod \"certified-operators-77v4m\" (UID: \"b0d34c95-0c24-4366-ab68-7d233f99cad9\") " pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.565466 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d34c95-0c24-4366-ab68-7d233f99cad9-catalog-content\") pod \"certified-operators-77v4m\" (UID: \"b0d34c95-0c24-4366-ab68-7d233f99cad9\") " pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.591586 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbbg9\" (UniqueName: \"kubernetes.io/projected/b0d34c95-0c24-4366-ab68-7d233f99cad9-kube-api-access-dbbg9\") pod \"certified-operators-77v4m\" (UID: \"b0d34c95-0c24-4366-ab68-7d233f99cad9\") " pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.648199 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:46 crc kubenswrapper[4825]: I0219 00:22:46.893308 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-77v4m"] Feb 19 00:22:47 crc kubenswrapper[4825]: I0219 00:22:47.179272 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77v4m" event={"ID":"b0d34c95-0c24-4366-ab68-7d233f99cad9","Type":"ContainerStarted","Data":"cc117ee4eff1c6470018f6e4ffe7fc517f1dd7ac3c8eb0c471dc569d06225c60"} Feb 19 00:22:49 crc kubenswrapper[4825]: I0219 00:22:49.195638 4825 generic.go:334] "Generic (PLEG): container finished" podID="b0d34c95-0c24-4366-ab68-7d233f99cad9" containerID="3f1ec29225a4d7646525edfeb55c535ce8d917bb384eea4af394db6e51d3726c" exitCode=0 Feb 19 00:22:49 crc kubenswrapper[4825]: I0219 00:22:49.195742 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77v4m" event={"ID":"b0d34c95-0c24-4366-ab68-7d233f99cad9","Type":"ContainerDied","Data":"3f1ec29225a4d7646525edfeb55c535ce8d917bb384eea4af394db6e51d3726c"} Feb 19 00:22:51 crc kubenswrapper[4825]: I0219 00:22:51.230408 4825 generic.go:334] "Generic (PLEG): container finished" podID="b0d34c95-0c24-4366-ab68-7d233f99cad9" containerID="f3ade4390ab85cb7dd13139aace99411857c45ba18de00a1b04e7f8554b973ad" exitCode=0 Feb 19 00:22:51 crc kubenswrapper[4825]: I0219 00:22:51.230468 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77v4m" event={"ID":"b0d34c95-0c24-4366-ab68-7d233f99cad9","Type":"ContainerDied","Data":"f3ade4390ab85cb7dd13139aace99411857c45ba18de00a1b04e7f8554b973ad"} Feb 19 00:22:53 crc kubenswrapper[4825]: I0219 00:22:53.247391 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77v4m" 
event={"ID":"b0d34c95-0c24-4366-ab68-7d233f99cad9","Type":"ContainerStarted","Data":"bc2b3d497a93ed7960ac3436253415641ebe5d45c078b6362d1300ce26957b73"} Feb 19 00:22:53 crc kubenswrapper[4825]: I0219 00:22:53.272021 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-77v4m" podStartSLOduration=3.617398467 podStartE2EDuration="7.271993547s" podCreationTimestamp="2026-02-19 00:22:46 +0000 UTC" firstStartedPulling="2026-02-19 00:22:49.197294435 +0000 UTC m=+914.888260482" lastFinishedPulling="2026-02-19 00:22:52.851889515 +0000 UTC m=+918.542855562" observedRunningTime="2026-02-19 00:22:53.269556062 +0000 UTC m=+918.960522129" watchObservedRunningTime="2026-02-19 00:22:53.271993547 +0000 UTC m=+918.962959594" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.500306 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sck8w"] Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.502563 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.521754 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sck8w"] Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.526911 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e901da44-719b-4de0-b91b-b8242648eac4-utilities\") pod \"community-operators-sck8w\" (UID: \"e901da44-719b-4de0-b91b-b8242648eac4\") " pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.526953 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e901da44-719b-4de0-b91b-b8242648eac4-catalog-content\") pod \"community-operators-sck8w\" (UID: \"e901da44-719b-4de0-b91b-b8242648eac4\") " pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.527013 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k8rf\" (UniqueName: \"kubernetes.io/projected/e901da44-719b-4de0-b91b-b8242648eac4-kube-api-access-4k8rf\") pod \"community-operators-sck8w\" (UID: \"e901da44-719b-4de0-b91b-b8242648eac4\") " pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.629225 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e901da44-719b-4de0-b91b-b8242648eac4-utilities\") pod \"community-operators-sck8w\" (UID: \"e901da44-719b-4de0-b91b-b8242648eac4\") " pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.629300 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e901da44-719b-4de0-b91b-b8242648eac4-catalog-content\") pod \"community-operators-sck8w\" (UID: \"e901da44-719b-4de0-b91b-b8242648eac4\") " pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.629372 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k8rf\" (UniqueName: \"kubernetes.io/projected/e901da44-719b-4de0-b91b-b8242648eac4-kube-api-access-4k8rf\") pod \"community-operators-sck8w\" (UID: \"e901da44-719b-4de0-b91b-b8242648eac4\") " pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.629998 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e901da44-719b-4de0-b91b-b8242648eac4-utilities\") pod \"community-operators-sck8w\" (UID: \"e901da44-719b-4de0-b91b-b8242648eac4\") " pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.630046 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e901da44-719b-4de0-b91b-b8242648eac4-catalog-content\") pod \"community-operators-sck8w\" (UID: \"e901da44-719b-4de0-b91b-b8242648eac4\") " pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.648926 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.649134 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.663818 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4k8rf\" (UniqueName: \"kubernetes.io/projected/e901da44-719b-4de0-b91b-b8242648eac4-kube-api-access-4k8rf\") pod \"community-operators-sck8w\" (UID: \"e901da44-719b-4de0-b91b-b8242648eac4\") " pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.701054 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:56 crc kubenswrapper[4825]: I0219 00:22:56.827728 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:22:57 crc kubenswrapper[4825]: I0219 00:22:57.164339 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sck8w"] Feb 19 00:22:57 crc kubenswrapper[4825]: I0219 00:22:57.286064 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sck8w" event={"ID":"e901da44-719b-4de0-b91b-b8242648eac4","Type":"ContainerStarted","Data":"14cfb02b1e6ad027df1d522a6b61b168fa68a1614cad47c50a81ff9f91c2b599"} Feb 19 00:22:58 crc kubenswrapper[4825]: I0219 00:22:58.356706 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:22:59 crc kubenswrapper[4825]: I0219 00:22:59.321464 4825 generic.go:334] "Generic (PLEG): container finished" podID="e901da44-719b-4de0-b91b-b8242648eac4" containerID="dd454f6ab75b4487d91026eefe6c42f5a0204e8e8e103b3e255595461dc25588" exitCode=0 Feb 19 00:22:59 crc kubenswrapper[4825]: I0219 00:22:59.321758 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sck8w" event={"ID":"e901da44-719b-4de0-b91b-b8242648eac4","Type":"ContainerDied","Data":"dd454f6ab75b4487d91026eefe6c42f5a0204e8e8e103b3e255595461dc25588"} Feb 19 00:23:01 crc kubenswrapper[4825]: I0219 00:23:01.343543 4825 generic.go:334] 
"Generic (PLEG): container finished" podID="e901da44-719b-4de0-b91b-b8242648eac4" containerID="c4529c4d97a8fc8e361dc418963a14db30fd17a012f12e933f8dfef2b4d5ffa0" exitCode=0 Feb 19 00:23:01 crc kubenswrapper[4825]: I0219 00:23:01.343651 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sck8w" event={"ID":"e901da44-719b-4de0-b91b-b8242648eac4","Type":"ContainerDied","Data":"c4529c4d97a8fc8e361dc418963a14db30fd17a012f12e933f8dfef2b4d5ffa0"} Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.095174 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-77v4m"] Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.095905 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-77v4m" podUID="b0d34c95-0c24-4366-ab68-7d233f99cad9" containerName="registry-server" containerID="cri-o://bc2b3d497a93ed7960ac3436253415641ebe5d45c078b6362d1300ce26957b73" gracePeriod=2 Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.364256 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sck8w" event={"ID":"e901da44-719b-4de0-b91b-b8242648eac4","Type":"ContainerStarted","Data":"c557828ab7cc7bb2ce35b2fe0117804078c6eaf89805dcb343471e633824c581"} Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.368270 4825 generic.go:334] "Generic (PLEG): container finished" podID="b0d34c95-0c24-4366-ab68-7d233f99cad9" containerID="bc2b3d497a93ed7960ac3436253415641ebe5d45c078b6362d1300ce26957b73" exitCode=0 Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.368312 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77v4m" event={"ID":"b0d34c95-0c24-4366-ab68-7d233f99cad9","Type":"ContainerDied","Data":"bc2b3d497a93ed7960ac3436253415641ebe5d45c078b6362d1300ce26957b73"} Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 
00:23:02.391143 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sck8w" podStartSLOduration=3.972630388 podStartE2EDuration="6.391114456s" podCreationTimestamp="2026-02-19 00:22:56 +0000 UTC" firstStartedPulling="2026-02-19 00:22:59.325436411 +0000 UTC m=+925.016402498" lastFinishedPulling="2026-02-19 00:23:01.743920519 +0000 UTC m=+927.434886566" observedRunningTime="2026-02-19 00:23:02.385649331 +0000 UTC m=+928.076615388" watchObservedRunningTime="2026-02-19 00:23:02.391114456 +0000 UTC m=+928.082080503" Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.470919 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.526681 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d34c95-0c24-4366-ab68-7d233f99cad9-utilities\") pod \"b0d34c95-0c24-4366-ab68-7d233f99cad9\" (UID: \"b0d34c95-0c24-4366-ab68-7d233f99cad9\") " Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.526786 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbbg9\" (UniqueName: \"kubernetes.io/projected/b0d34c95-0c24-4366-ab68-7d233f99cad9-kube-api-access-dbbg9\") pod \"b0d34c95-0c24-4366-ab68-7d233f99cad9\" (UID: \"b0d34c95-0c24-4366-ab68-7d233f99cad9\") " Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.526858 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d34c95-0c24-4366-ab68-7d233f99cad9-catalog-content\") pod \"b0d34c95-0c24-4366-ab68-7d233f99cad9\" (UID: \"b0d34c95-0c24-4366-ab68-7d233f99cad9\") " Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.539788 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b0d34c95-0c24-4366-ab68-7d233f99cad9-kube-api-access-dbbg9" (OuterVolumeSpecName: "kube-api-access-dbbg9") pod "b0d34c95-0c24-4366-ab68-7d233f99cad9" (UID: "b0d34c95-0c24-4366-ab68-7d233f99cad9"). InnerVolumeSpecName "kube-api-access-dbbg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.540635 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d34c95-0c24-4366-ab68-7d233f99cad9-utilities" (OuterVolumeSpecName: "utilities") pod "b0d34c95-0c24-4366-ab68-7d233f99cad9" (UID: "b0d34c95-0c24-4366-ab68-7d233f99cad9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.590650 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d34c95-0c24-4366-ab68-7d233f99cad9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0d34c95-0c24-4366-ab68-7d233f99cad9" (UID: "b0d34c95-0c24-4366-ab68-7d233f99cad9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.628997 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d34c95-0c24-4366-ab68-7d233f99cad9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.629041 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d34c95-0c24-4366-ab68-7d233f99cad9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:02 crc kubenswrapper[4825]: I0219 00:23:02.629052 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbbg9\" (UniqueName: \"kubernetes.io/projected/b0d34c95-0c24-4366-ab68-7d233f99cad9-kube-api-access-dbbg9\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:03 crc kubenswrapper[4825]: I0219 00:23:03.381619 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77v4m" event={"ID":"b0d34c95-0c24-4366-ab68-7d233f99cad9","Type":"ContainerDied","Data":"cc117ee4eff1c6470018f6e4ffe7fc517f1dd7ac3c8eb0c471dc569d06225c60"} Feb 19 00:23:03 crc kubenswrapper[4825]: I0219 00:23:03.381723 4825 scope.go:117] "RemoveContainer" containerID="bc2b3d497a93ed7960ac3436253415641ebe5d45c078b6362d1300ce26957b73" Feb 19 00:23:03 crc kubenswrapper[4825]: I0219 00:23:03.381783 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77v4m" Feb 19 00:23:03 crc kubenswrapper[4825]: I0219 00:23:03.402527 4825 scope.go:117] "RemoveContainer" containerID="f3ade4390ab85cb7dd13139aace99411857c45ba18de00a1b04e7f8554b973ad" Feb 19 00:23:03 crc kubenswrapper[4825]: I0219 00:23:03.407753 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-77v4m"] Feb 19 00:23:03 crc kubenswrapper[4825]: I0219 00:23:03.417423 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-77v4m"] Feb 19 00:23:03 crc kubenswrapper[4825]: I0219 00:23:03.420769 4825 scope.go:117] "RemoveContainer" containerID="3f1ec29225a4d7646525edfeb55c535ce8d917bb384eea4af394db6e51d3726c" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.079685 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d34c95-0c24-4366-ab68-7d233f99cad9" path="/var/lib/kubelet/pods/b0d34c95-0c24-4366-ab68-7d233f99cad9/volumes" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.739058 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-2ldkb"] Feb 19 00:23:05 crc kubenswrapper[4825]: E0219 00:23:05.739378 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d34c95-0c24-4366-ab68-7d233f99cad9" containerName="extract-utilities" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.739404 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d34c95-0c24-4366-ab68-7d233f99cad9" containerName="extract-utilities" Feb 19 00:23:05 crc kubenswrapper[4825]: E0219 00:23:05.739436 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d34c95-0c24-4366-ab68-7d233f99cad9" containerName="registry-server" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.739442 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d34c95-0c24-4366-ab68-7d233f99cad9" 
containerName="registry-server" Feb 19 00:23:05 crc kubenswrapper[4825]: E0219 00:23:05.739455 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d34c95-0c24-4366-ab68-7d233f99cad9" containerName="extract-content" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.739462 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d34c95-0c24-4366-ab68-7d233f99cad9" containerName="extract-content" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.739600 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d34c95-0c24-4366-ab68-7d233f99cad9" containerName="registry-server" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.740213 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.745397 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.746206 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.746262 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-65z7l" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.746363 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.746561 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.748049 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Feb 19 00:23:05 crc 
kubenswrapper[4825]: I0219 00:23:05.776110 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-2ldkb"] Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.776287 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.783776 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.783844 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzrp\" (UniqueName: \"kubernetes.io/projected/c5872ac9-2a95-4315-b1c9-f31ad523fc98-kube-api-access-6fzrp\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.783888 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c5872ac9-2a95-4315-b1c9-f31ad523fc98-sasl-config\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.783913 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: 
\"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.783950 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.783976 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.783999 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-sasl-users\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.886090 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzrp\" (UniqueName: \"kubernetes.io/projected/c5872ac9-2a95-4315-b1c9-f31ad523fc98-kube-api-access-6fzrp\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") 
" pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.886166 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c5872ac9-2a95-4315-b1c9-f31ad523fc98-sasl-config\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.886212 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.886257 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.886293 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.886322 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sasl-users\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-sasl-users\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.886346 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.888302 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c5872ac9-2a95-4315-b1c9-f31ad523fc98-sasl-config\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.894440 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.895349 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " 
pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.896048 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.896984 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.898639 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-sasl-users\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:05 crc kubenswrapper[4825]: I0219 00:23:05.907801 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzrp\" (UniqueName: \"kubernetes.io/projected/c5872ac9-2a95-4315-b1c9-f31ad523fc98-kube-api-access-6fzrp\") pod \"default-interconnect-68864d46cb-2ldkb\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") " pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:06 crc kubenswrapper[4825]: I0219 00:23:06.066053 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:23:06 crc kubenswrapper[4825]: W0219 00:23:06.327601 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5872ac9_2a95_4315_b1c9_f31ad523fc98.slice/crio-991c03b139436a831ea139ff7019689348dd9000281523a3790e05c1d1d1fb58 WatchSource:0}: Error finding container 991c03b139436a831ea139ff7019689348dd9000281523a3790e05c1d1d1fb58: Status 404 returned error can't find the container with id 991c03b139436a831ea139ff7019689348dd9000281523a3790e05c1d1d1fb58 Feb 19 00:23:06 crc kubenswrapper[4825]: I0219 00:23:06.320534 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-2ldkb"] Feb 19 00:23:06 crc kubenswrapper[4825]: I0219 00:23:06.404301 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" event={"ID":"c5872ac9-2a95-4315-b1c9-f31ad523fc98","Type":"ContainerStarted","Data":"991c03b139436a831ea139ff7019689348dd9000281523a3790e05c1d1d1fb58"} Feb 19 00:23:06 crc kubenswrapper[4825]: I0219 00:23:06.828583 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:23:06 crc kubenswrapper[4825]: I0219 00:23:06.829083 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:23:06 crc kubenswrapper[4825]: I0219 00:23:06.878878 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:23:07 crc kubenswrapper[4825]: I0219 00:23:07.479610 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:23:09 crc kubenswrapper[4825]: I0219 00:23:09.698538 4825 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/community-operators-sck8w"] Feb 19 00:23:09 crc kubenswrapper[4825]: I0219 00:23:09.699352 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sck8w" podUID="e901da44-719b-4de0-b91b-b8242648eac4" containerName="registry-server" containerID="cri-o://c557828ab7cc7bb2ce35b2fe0117804078c6eaf89805dcb343471e633824c581" gracePeriod=2 Feb 19 00:23:10 crc kubenswrapper[4825]: I0219 00:23:10.451575 4825 generic.go:334] "Generic (PLEG): container finished" podID="e901da44-719b-4de0-b91b-b8242648eac4" containerID="c557828ab7cc7bb2ce35b2fe0117804078c6eaf89805dcb343471e633824c581" exitCode=0 Feb 19 00:23:10 crc kubenswrapper[4825]: I0219 00:23:10.452047 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sck8w" event={"ID":"e901da44-719b-4de0-b91b-b8242648eac4","Type":"ContainerDied","Data":"c557828ab7cc7bb2ce35b2fe0117804078c6eaf89805dcb343471e633824c581"} Feb 19 00:23:11 crc kubenswrapper[4825]: I0219 00:23:11.860044 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:23:11 crc kubenswrapper[4825]: I0219 00:23:11.898860 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e901da44-719b-4de0-b91b-b8242648eac4-catalog-content\") pod \"e901da44-719b-4de0-b91b-b8242648eac4\" (UID: \"e901da44-719b-4de0-b91b-b8242648eac4\") " Feb 19 00:23:11 crc kubenswrapper[4825]: I0219 00:23:11.900485 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e901da44-719b-4de0-b91b-b8242648eac4-utilities\") pod \"e901da44-719b-4de0-b91b-b8242648eac4\" (UID: \"e901da44-719b-4de0-b91b-b8242648eac4\") " Feb 19 00:23:11 crc kubenswrapper[4825]: I0219 00:23:11.900709 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k8rf\" (UniqueName: \"kubernetes.io/projected/e901da44-719b-4de0-b91b-b8242648eac4-kube-api-access-4k8rf\") pod \"e901da44-719b-4de0-b91b-b8242648eac4\" (UID: \"e901da44-719b-4de0-b91b-b8242648eac4\") " Feb 19 00:23:11 crc kubenswrapper[4825]: I0219 00:23:11.902965 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e901da44-719b-4de0-b91b-b8242648eac4-utilities" (OuterVolumeSpecName: "utilities") pod "e901da44-719b-4de0-b91b-b8242648eac4" (UID: "e901da44-719b-4de0-b91b-b8242648eac4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:23:11 crc kubenswrapper[4825]: I0219 00:23:11.914856 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e901da44-719b-4de0-b91b-b8242648eac4-kube-api-access-4k8rf" (OuterVolumeSpecName: "kube-api-access-4k8rf") pod "e901da44-719b-4de0-b91b-b8242648eac4" (UID: "e901da44-719b-4de0-b91b-b8242648eac4"). InnerVolumeSpecName "kube-api-access-4k8rf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:23:11 crc kubenswrapper[4825]: I0219 00:23:11.965090 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e901da44-719b-4de0-b91b-b8242648eac4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e901da44-719b-4de0-b91b-b8242648eac4" (UID: "e901da44-719b-4de0-b91b-b8242648eac4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:23:12 crc kubenswrapper[4825]: I0219 00:23:12.002746 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k8rf\" (UniqueName: \"kubernetes.io/projected/e901da44-719b-4de0-b91b-b8242648eac4-kube-api-access-4k8rf\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:12 crc kubenswrapper[4825]: I0219 00:23:12.002814 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e901da44-719b-4de0-b91b-b8242648eac4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:12 crc kubenswrapper[4825]: I0219 00:23:12.002828 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e901da44-719b-4de0-b91b-b8242648eac4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:12 crc kubenswrapper[4825]: I0219 00:23:12.471230 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sck8w" event={"ID":"e901da44-719b-4de0-b91b-b8242648eac4","Type":"ContainerDied","Data":"14cfb02b1e6ad027df1d522a6b61b168fa68a1614cad47c50a81ff9f91c2b599"} Feb 19 00:23:12 crc kubenswrapper[4825]: I0219 00:23:12.471729 4825 scope.go:117] "RemoveContainer" containerID="c557828ab7cc7bb2ce35b2fe0117804078c6eaf89805dcb343471e633824c581" Feb 19 00:23:12 crc kubenswrapper[4825]: I0219 00:23:12.471901 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sck8w" Feb 19 00:23:12 crc kubenswrapper[4825]: I0219 00:23:12.475902 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" event={"ID":"c5872ac9-2a95-4315-b1c9-f31ad523fc98","Type":"ContainerStarted","Data":"6439ced09639960b33b272aab27055f8cf08da9a79ea4c73b79d66caeff16877"} Feb 19 00:23:12 crc kubenswrapper[4825]: I0219 00:23:12.495582 4825 scope.go:117] "RemoveContainer" containerID="c4529c4d97a8fc8e361dc418963a14db30fd17a012f12e933f8dfef2b4d5ffa0" Feb 19 00:23:12 crc kubenswrapper[4825]: I0219 00:23:12.510966 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" podStartSLOduration=2.087887374 podStartE2EDuration="7.510943619s" podCreationTimestamp="2026-02-19 00:23:05 +0000 UTC" firstStartedPulling="2026-02-19 00:23:06.330129649 +0000 UTC m=+932.021095696" lastFinishedPulling="2026-02-19 00:23:11.753185894 +0000 UTC m=+937.444151941" observedRunningTime="2026-02-19 00:23:12.501031686 +0000 UTC m=+938.191997733" watchObservedRunningTime="2026-02-19 00:23:12.510943619 +0000 UTC m=+938.201909666" Feb 19 00:23:12 crc kubenswrapper[4825]: I0219 00:23:12.529084 4825 scope.go:117] "RemoveContainer" containerID="dd454f6ab75b4487d91026eefe6c42f5a0204e8e8e103b3e255595461dc25588" Feb 19 00:23:12 crc kubenswrapper[4825]: I0219 00:23:12.544596 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sck8w"] Feb 19 00:23:12 crc kubenswrapper[4825]: I0219 00:23:12.557686 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sck8w"] Feb 19 00:23:13 crc kubenswrapper[4825]: I0219 00:23:13.073464 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e901da44-719b-4de0-b91b-b8242648eac4" 
path="/var/lib/kubelet/pods/e901da44-719b-4de0-b91b-b8242648eac4/volumes" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.802078 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 19 00:23:16 crc kubenswrapper[4825]: E0219 00:23:16.803286 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e901da44-719b-4de0-b91b-b8242648eac4" containerName="registry-server" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.803325 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="e901da44-719b-4de0-b91b-b8242648eac4" containerName="registry-server" Feb 19 00:23:16 crc kubenswrapper[4825]: E0219 00:23:16.803381 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e901da44-719b-4de0-b91b-b8242648eac4" containerName="extract-content" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.803394 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="e901da44-719b-4de0-b91b-b8242648eac4" containerName="extract-content" Feb 19 00:23:16 crc kubenswrapper[4825]: E0219 00:23:16.803428 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e901da44-719b-4de0-b91b-b8242648eac4" containerName="extract-utilities" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.803443 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="e901da44-719b-4de0-b91b-b8242648eac4" containerName="extract-utilities" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.803861 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="e901da44-719b-4de0-b91b-b8242648eac4" containerName="registry-server" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.806748 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.809887 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.809924 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-2" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.810177 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-1" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.810233 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.810460 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-rwrsf" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.810636 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.814594 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.815596 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.816434 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.817362 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.822105 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/prometheus-default-0"] Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.878557 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-tls-assets\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.879045 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-web-config\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.879241 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.879346 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.879372 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-config\") pod \"prometheus-default-0\" (UID: 
\"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.879403 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-config-out\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.879429 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.879647 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.879728 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.879804 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.879842 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dbdb9f6f-af9d-4777-b199-8e3031d545f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dbdb9f6f-af9d-4777-b199-8e3031d545f4\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.880032 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9xh5\" (UniqueName: \"kubernetes.io/projected/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-kube-api-access-c9xh5\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.981937 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-web-config\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.982200 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.982280 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.982354 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-config\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.982441 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-config-out\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.982493 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.982600 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.982657 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.982738 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.982814 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dbdb9f6f-af9d-4777-b199-8e3031d545f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dbdb9f6f-af9d-4777-b199-8e3031d545f4\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.982925 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9xh5\" (UniqueName: \"kubernetes.io/projected/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-kube-api-access-c9xh5\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.982986 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-tls-assets\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: E0219 00:23:16.984306 4825 secret.go:188] Couldn't get secret 
service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 19 00:23:16 crc kubenswrapper[4825]: E0219 00:23:16.984419 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-secret-default-prometheus-proxy-tls podName:1a50e579-ea38-4bf0-bfd6-805ef1a6be97 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:17.484385883 +0000 UTC m=+943.175352110 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "1a50e579-ea38-4bf0-bfd6-805ef1a6be97") : secret "default-prometheus-proxy-tls" not found Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.984952 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.984998 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.985131 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " 
pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.985252 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.987204 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.987235 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dbdb9f6f-af9d-4777-b199-8e3031d545f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dbdb9f6f-af9d-4777-b199-8e3031d545f4\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e7039253da4e93969ea054e70ca41b16a7d9969d69412426242a983e752ea421/globalmount\"" pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.990544 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-web-config\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.990786 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-tls-assets\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc 
kubenswrapper[4825]: I0219 00:23:16.991059 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.991152 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-config\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:16 crc kubenswrapper[4825]: I0219 00:23:16.991806 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-config-out\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:17 crc kubenswrapper[4825]: I0219 00:23:17.004053 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9xh5\" (UniqueName: \"kubernetes.io/projected/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-kube-api-access-c9xh5\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:17 crc kubenswrapper[4825]: I0219 00:23:17.014555 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dbdb9f6f-af9d-4777-b199-8e3031d545f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dbdb9f6f-af9d-4777-b199-8e3031d545f4\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:17 crc kubenswrapper[4825]: I0219 00:23:17.492480 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:17 crc kubenswrapper[4825]: E0219 00:23:17.492772 4825 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 19 00:23:17 crc kubenswrapper[4825]: E0219 00:23:17.492922 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-secret-default-prometheus-proxy-tls podName:1a50e579-ea38-4bf0-bfd6-805ef1a6be97 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:18.492884372 +0000 UTC m=+944.183850419 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "1a50e579-ea38-4bf0-bfd6-805ef1a6be97") : secret "default-prometheus-proxy-tls" not found Feb 19 00:23:18 crc kubenswrapper[4825]: I0219 00:23:18.519142 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:18 crc kubenswrapper[4825]: I0219 00:23:18.523882 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a50e579-ea38-4bf0-bfd6-805ef1a6be97-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: 
\"1a50e579-ea38-4bf0-bfd6-805ef1a6be97\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:23:18 crc kubenswrapper[4825]: I0219 00:23:18.635200 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 19 00:23:18 crc kubenswrapper[4825]: I0219 00:23:18.891447 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 19 00:23:19 crc kubenswrapper[4825]: I0219 00:23:19.564951 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1a50e579-ea38-4bf0-bfd6-805ef1a6be97","Type":"ContainerStarted","Data":"54ae0b0121a30e9501440b049524f8ec4e27cdfdafcfa4c087ba9fe42765d505"} Feb 19 00:23:23 crc kubenswrapper[4825]: I0219 00:23:23.598444 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1a50e579-ea38-4bf0-bfd6-805ef1a6be97","Type":"ContainerStarted","Data":"6a8fdb70273f596b3829fb6428478878ccba7bc1bef5193e4b72516ef4d49134"} Feb 19 00:23:27 crc kubenswrapper[4825]: I0219 00:23:27.621129 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-c2z9k"] Feb 19 00:23:27 crc kubenswrapper[4825]: I0219 00:23:27.622887 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-c2z9k" Feb 19 00:23:27 crc kubenswrapper[4825]: I0219 00:23:27.627602 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-c2z9k"] Feb 19 00:23:27 crc kubenswrapper[4825]: I0219 00:23:27.671053 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w2l5\" (UniqueName: \"kubernetes.io/projected/b7420f29-5b9e-4801-9ad8-1ed62185e445-kube-api-access-7w2l5\") pod \"default-snmp-webhook-78bcbbdcff-c2z9k\" (UID: \"b7420f29-5b9e-4801-9ad8-1ed62185e445\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-c2z9k" Feb 19 00:23:27 crc kubenswrapper[4825]: I0219 00:23:27.773496 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w2l5\" (UniqueName: \"kubernetes.io/projected/b7420f29-5b9e-4801-9ad8-1ed62185e445-kube-api-access-7w2l5\") pod \"default-snmp-webhook-78bcbbdcff-c2z9k\" (UID: \"b7420f29-5b9e-4801-9ad8-1ed62185e445\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-c2z9k" Feb 19 00:23:27 crc kubenswrapper[4825]: I0219 00:23:27.801189 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w2l5\" (UniqueName: \"kubernetes.io/projected/b7420f29-5b9e-4801-9ad8-1ed62185e445-kube-api-access-7w2l5\") pod \"default-snmp-webhook-78bcbbdcff-c2z9k\" (UID: \"b7420f29-5b9e-4801-9ad8-1ed62185e445\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-c2z9k" Feb 19 00:23:27 crc kubenswrapper[4825]: I0219 00:23:27.996633 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-c2z9k" Feb 19 00:23:28 crc kubenswrapper[4825]: I0219 00:23:28.218398 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-c2z9k"] Feb 19 00:23:28 crc kubenswrapper[4825]: I0219 00:23:28.225012 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:23:28 crc kubenswrapper[4825]: I0219 00:23:28.638541 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-c2z9k" event={"ID":"b7420f29-5b9e-4801-9ad8-1ed62185e445","Type":"ContainerStarted","Data":"f8fcd9d0b27acf9e691232c5e35a43a1d6f2120cc7c5840de2d99d6340f43be4"} Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.086616 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.089631 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.093130 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.094098 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.094158 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-qb5xf" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.094393 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.094475 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.094707 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.094750 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.134935 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4fcb8bc4-d2fe-4be4-84e9-8e40520db54a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fcb8bc4-d2fe-4be4-84e9-8e40520db54a\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.134998 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: 
\"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.135021 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-config-volume\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.135039 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0c751ba6-cfab-49ff-9243-e332977dfee1-config-out\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.135079 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.135104 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.135145 4825 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0c751ba6-cfab-49ff-9243-e332977dfee1-tls-assets\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.135176 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-web-config\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.135205 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f75zc\" (UniqueName: \"kubernetes.io/projected/0c751ba6-cfab-49ff-9243-e332977dfee1-kube-api-access-f75zc\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.236365 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.236428 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0c751ba6-cfab-49ff-9243-e332977dfee1-tls-assets\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.236457 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-web-config\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.236478 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f75zc\" (UniqueName: \"kubernetes.io/projected/0c751ba6-cfab-49ff-9243-e332977dfee1-kube-api-access-f75zc\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.236534 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4fcb8bc4-d2fe-4be4-84e9-8e40520db54a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fcb8bc4-d2fe-4be4-84e9-8e40520db54a\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.236563 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.236582 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-config-volume\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.236605 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0c751ba6-cfab-49ff-9243-e332977dfee1-config-out\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.236662 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: E0219 00:23:31.236836 4825 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:23:31 crc kubenswrapper[4825]: E0219 00:23:31.236909 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls podName:0c751ba6-cfab-49ff-9243-e332977dfee1 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:31.736884903 +0000 UTC m=+957.427850950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "0c751ba6-cfab-49ff-9243-e332977dfee1") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.244869 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-web-config\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.245091 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-config-volume\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.245776 4825 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.245817 4825 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4fcb8bc4-d2fe-4be4-84e9-8e40520db54a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fcb8bc4-d2fe-4be4-84e9-8e40520db54a\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9320c930acf92ae5fdf585dd766d96fc689e3b24bb50233a58e12a3d2b5af65e/globalmount\"" pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.253132 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0c751ba6-cfab-49ff-9243-e332977dfee1-tls-assets\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.253352 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.253245 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0c751ba6-cfab-49ff-9243-e332977dfee1-config-out\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.253619 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-session-secret\") pod 
\"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.256884 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f75zc\" (UniqueName: \"kubernetes.io/projected/0c751ba6-cfab-49ff-9243-e332977dfee1-kube-api-access-f75zc\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.276994 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4fcb8bc4-d2fe-4be4-84e9-8e40520db54a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4fcb8bc4-d2fe-4be4-84e9-8e40520db54a\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.671400 4825 generic.go:334] "Generic (PLEG): container finished" podID="1a50e579-ea38-4bf0-bfd6-805ef1a6be97" containerID="6a8fdb70273f596b3829fb6428478878ccba7bc1bef5193e4b72516ef4d49134" exitCode=0 Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.671457 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1a50e579-ea38-4bf0-bfd6-805ef1a6be97","Type":"ContainerDied","Data":"6a8fdb70273f596b3829fb6428478878ccba7bc1bef5193e4b72516ef4d49134"} Feb 19 00:23:31 crc kubenswrapper[4825]: I0219 00:23:31.747818 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:31 crc kubenswrapper[4825]: E0219 00:23:31.748036 4825 
secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:23:31 crc kubenswrapper[4825]: E0219 00:23:31.748131 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls podName:0c751ba6-cfab-49ff-9243-e332977dfee1 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:32.748107063 +0000 UTC m=+958.439073110 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "0c751ba6-cfab-49ff-9243-e332977dfee1") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:23:32 crc kubenswrapper[4825]: I0219 00:23:32.763417 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:32 crc kubenswrapper[4825]: E0219 00:23:32.763659 4825 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:23:32 crc kubenswrapper[4825]: E0219 00:23:32.764023 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls podName:0c751ba6-cfab-49ff-9243-e332977dfee1 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:34.763994458 +0000 UTC m=+960.454960505 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "0c751ba6-cfab-49ff-9243-e332977dfee1") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:23:34 crc kubenswrapper[4825]: I0219 00:23:34.806622 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:34 crc kubenswrapper[4825]: E0219 00:23:34.806852 4825 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:23:34 crc kubenswrapper[4825]: E0219 00:23:34.806970 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls podName:0c751ba6-cfab-49ff-9243-e332977dfee1 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:38.806941888 +0000 UTC m=+964.497907945 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "0c751ba6-cfab-49ff-9243-e332977dfee1") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:23:38 crc kubenswrapper[4825]: I0219 00:23:38.883432 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:38 crc kubenswrapper[4825]: I0219 00:23:38.890878 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c751ba6-cfab-49ff-9243-e332977dfee1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0c751ba6-cfab-49ff-9243-e332977dfee1\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:38 crc kubenswrapper[4825]: I0219 00:23:38.916230 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-qb5xf" Feb 19 00:23:38 crc kubenswrapper[4825]: I0219 00:23:38.924914 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 19 00:23:39 crc kubenswrapper[4825]: I0219 00:23:39.619554 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 19 00:23:39 crc kubenswrapper[4825]: I0219 00:23:39.753390 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-c2z9k" event={"ID":"b7420f29-5b9e-4801-9ad8-1ed62185e445","Type":"ContainerStarted","Data":"fb7fd139134bd31e700987ebd948d44ca58f204b22827ff21528285f9cfabbe6"} Feb 19 00:23:39 crc kubenswrapper[4825]: I0219 00:23:39.755098 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"0c751ba6-cfab-49ff-9243-e332977dfee1","Type":"ContainerStarted","Data":"ccca7fc4690277df546220adc3af3c6bd05ebf82f5fb8a25f5cf166ccaef8b9d"} Feb 19 00:23:39 crc kubenswrapper[4825]: I0219 00:23:39.770404 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-c2z9k" podStartSLOduration=1.870038912 podStartE2EDuration="12.770383325s" podCreationTimestamp="2026-02-19 00:23:27 +0000 UTC" firstStartedPulling="2026-02-19 00:23:28.224698568 +0000 UTC m=+953.915664615" lastFinishedPulling="2026-02-19 00:23:39.125042981 +0000 UTC m=+964.816009028" observedRunningTime="2026-02-19 00:23:39.767336814 +0000 UTC m=+965.458302851" watchObservedRunningTime="2026-02-19 00:23:39.770383325 +0000 UTC m=+965.461349372" Feb 19 00:23:41 crc kubenswrapper[4825]: I0219 00:23:41.775645 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"0c751ba6-cfab-49ff-9243-e332977dfee1","Type":"ContainerStarted","Data":"950513e04e8485e8892c29c9d1397796afdda6c6d8175c1b7401c322808ca39a"} Feb 19 00:23:46 crc kubenswrapper[4825]: I0219 00:23:46.819546 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/prometheus-default-0" event={"ID":"1a50e579-ea38-4bf0-bfd6-805ef1a6be97","Type":"ContainerStarted","Data":"5ff0ce1890c93307b61401497009d9c4d528fc098c836c70346031620db5b505"} Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.278402 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq"] Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.281784 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.286450 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.286772 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-2m7gq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.286896 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.287087 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.293385 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq"] Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.463188 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c855158e-a4dc-467a-9d2b-923761e2cb45-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.463712 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c855158e-a4dc-467a-9d2b-923761e2cb45-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.463734 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbq6z\" (UniqueName: \"kubernetes.io/projected/c855158e-a4dc-467a-9d2b-923761e2cb45-kube-api-access-sbq6z\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.464498 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c855158e-a4dc-467a-9d2b-923761e2cb45-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.464666 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c855158e-a4dc-467a-9d2b-923761e2cb45-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.566897 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c855158e-a4dc-467a-9d2b-923761e2cb45-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.566978 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c855158e-a4dc-467a-9d2b-923761e2cb45-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.567036 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c855158e-a4dc-467a-9d2b-923761e2cb45-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.567064 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbq6z\" (UniqueName: \"kubernetes.io/projected/c855158e-a4dc-467a-9d2b-923761e2cb45-kube-api-access-sbq6z\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.567141 4825 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c855158e-a4dc-467a-9d2b-923761e2cb45-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: E0219 00:23:48.567177 4825 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:23:48 crc kubenswrapper[4825]: E0219 00:23:48.567272 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c855158e-a4dc-467a-9d2b-923761e2cb45-default-cloud1-coll-meter-proxy-tls podName:c855158e-a4dc-467a-9d2b-923761e2cb45 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:49.067245186 +0000 UTC m=+974.758211423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/c855158e-a4dc-467a-9d2b-923761e2cb45-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" (UID: "c855158e-a4dc-467a-9d2b-923761e2cb45") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.567995 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c855158e-a4dc-467a-9d2b-923761e2cb45-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.568494 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: 
\"kubernetes.io/configmap/c855158e-a4dc-467a-9d2b-923761e2cb45-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.584669 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c855158e-a4dc-467a-9d2b-923761e2cb45-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.590860 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbq6z\" (UniqueName: \"kubernetes.io/projected/c855158e-a4dc-467a-9d2b-923761e2cb45-kube-api-access-sbq6z\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:48 crc kubenswrapper[4825]: I0219 00:23:48.864268 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1a50e579-ea38-4bf0-bfd6-805ef1a6be97","Type":"ContainerStarted","Data":"55a02393bbd002419b2a1d3866d424e1d9f77926f14de361029427a029cda10b"} Feb 19 00:23:49 crc kubenswrapper[4825]: I0219 00:23:49.074009 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c855158e-a4dc-467a-9d2b-923761e2cb45-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:49 crc 
kubenswrapper[4825]: E0219 00:23:49.074372 4825 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:23:49 crc kubenswrapper[4825]: E0219 00:23:49.074493 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c855158e-a4dc-467a-9d2b-923761e2cb45-default-cloud1-coll-meter-proxy-tls podName:c855158e-a4dc-467a-9d2b-923761e2cb45 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:50.074461599 +0000 UTC m=+975.765427776 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/c855158e-a4dc-467a-9d2b-923761e2cb45-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" (UID: "c855158e-a4dc-467a-9d2b-923761e2cb45") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:23:50 crc kubenswrapper[4825]: I0219 00:23:50.097673 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c855158e-a4dc-467a-9d2b-923761e2cb45-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:50 crc kubenswrapper[4825]: I0219 00:23:50.105290 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c855158e-a4dc-467a-9d2b-923761e2cb45-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-789zq\" (UID: \"c855158e-a4dc-467a-9d2b-923761e2cb45\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:50 crc kubenswrapper[4825]: I0219 00:23:50.113114 4825 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" Feb 19 00:23:50 crc kubenswrapper[4825]: I0219 00:23:50.593151 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq"] Feb 19 00:23:50 crc kubenswrapper[4825]: I0219 00:23:50.812043 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd"] Feb 19 00:23:50 crc kubenswrapper[4825]: I0219 00:23:50.813602 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:50 crc kubenswrapper[4825]: I0219 00:23:50.820030 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd"] Feb 19 00:23:50 crc kubenswrapper[4825]: I0219 00:23:50.821973 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Feb 19 00:23:50 crc kubenswrapper[4825]: I0219 00:23:50.822040 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Feb 19 00:23:50 crc kubenswrapper[4825]: I0219 00:23:50.883545 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" event={"ID":"c855158e-a4dc-467a-9d2b-923761e2cb45","Type":"ContainerStarted","Data":"2ebe4bb1fd1c9b0912eb4731075dbb1e18bf4e6ec1089ee2dc7f2a64ab6e0729"} Feb 19 00:23:50 crc kubenswrapper[4825]: I0219 00:23:50.885925 4825 generic.go:334] "Generic (PLEG): container finished" podID="0c751ba6-cfab-49ff-9243-e332977dfee1" containerID="950513e04e8485e8892c29c9d1397796afdda6c6d8175c1b7401c322808ca39a" exitCode=0 Feb 19 00:23:50 crc kubenswrapper[4825]: I0219 
00:23:50.885988 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"0c751ba6-cfab-49ff-9243-e332977dfee1","Type":"ContainerDied","Data":"950513e04e8485e8892c29c9d1397796afdda6c6d8175c1b7401c322808ca39a"} Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.019793 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.019949 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.020015 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqbk4\" (UniqueName: \"kubernetes.io/projected/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-kube-api-access-jqbk4\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.020047 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-socket-dir\") pod 
\"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.020149 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.121245 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqbk4\" (UniqueName: \"kubernetes.io/projected/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-kube-api-access-jqbk4\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.122672 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.122872 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: 
\"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: E0219 00:23:51.122951 4825 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.123076 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: E0219 00:23:51.123199 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-default-cloud1-ceil-meter-proxy-tls podName:1deef3e8-3d46-4e72-a60e-3d5166dc6a4b nodeName:}" failed. No retries permitted until 2026-02-19 00:23:51.623145803 +0000 UTC m=+977.314111850 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" (UID: "1deef3e8-3d46-4e72-a60e-3d5166dc6a4b") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.123289 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.123477 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.124378 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.138117 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: 
\"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.141946 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqbk4\" (UniqueName: \"kubernetes.io/projected/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-kube-api-access-jqbk4\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: I0219 00:23:51.629595 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:51 crc kubenswrapper[4825]: E0219 00:23:51.629813 4825 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:23:51 crc kubenswrapper[4825]: E0219 00:23:51.630154 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-default-cloud1-ceil-meter-proxy-tls podName:1deef3e8-3d46-4e72-a60e-3d5166dc6a4b nodeName:}" failed. No retries permitted until 2026-02-19 00:23:52.63013359 +0000 UTC m=+978.321099637 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" (UID: "1deef3e8-3d46-4e72-a60e-3d5166dc6a4b") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:23:52 crc kubenswrapper[4825]: I0219 00:23:52.646677 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:52 crc kubenswrapper[4825]: I0219 00:23:52.651829 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1deef3e8-3d46-4e72-a60e-3d5166dc6a4b-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd\" (UID: \"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:52 crc kubenswrapper[4825]: I0219 00:23:52.940336 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" Feb 19 00:23:54 crc kubenswrapper[4825]: I0219 00:23:54.103729 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd"] Feb 19 00:23:54 crc kubenswrapper[4825]: I0219 00:23:54.913588 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" event={"ID":"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b","Type":"ContainerStarted","Data":"1978f7193c8191c92b1ead37bb6debdcdaf7f0512af3d979f821d0bdead77308"} Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.213545 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"] Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.216352 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.219105 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.219146 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.233173 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"] Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.296520 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: 
\"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.297009 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.297194 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.297347 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.297780 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q28d5\" (UniqueName: \"kubernetes.io/projected/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-kube-api-access-q28d5\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.399800 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" Feb 19 00:23:55 crc kubenswrapper[4825]: E0219 00:23:55.399982 4825 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:23:55 crc kubenswrapper[4825]: E0219 00:23:55.400498 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-default-cloud1-sens-meter-proxy-tls podName:ff2e035a-3702-489c-ad4b-b2892b4e8ac9 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:55.900463523 +0000 UTC m=+981.591429570 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" (UID: "ff2e035a-3702-489c-ad4b-b2892b4e8ac9") : secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.401171 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"
Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.401529 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"
Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.402757 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q28d5\" (UniqueName: \"kubernetes.io/projected/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-kube-api-access-q28d5\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"
Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.403168 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"
Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.405262 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"
Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.405559 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"
Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.410250 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"
Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.418954 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q28d5\" (UniqueName: \"kubernetes.io/projected/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-kube-api-access-q28d5\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"
Feb 19 00:23:55 crc kubenswrapper[4825]: I0219 00:23:55.913065 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"
Feb 19 00:23:55 crc kubenswrapper[4825]: E0219 00:23:55.913311 4825 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 19 00:23:55 crc kubenswrapper[4825]: E0219 00:23:55.913387 4825 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-default-cloud1-sens-meter-proxy-tls podName:ff2e035a-3702-489c-ad4b-b2892b4e8ac9 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:56.913362018 +0000 UTC m=+982.604328065 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" (UID: "ff2e035a-3702-489c-ad4b-b2892b4e8ac9") : secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 19 00:23:56 crc kubenswrapper[4825]: I0219 00:23:56.947139 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"
Feb 19 00:23:56 crc kubenswrapper[4825]: I0219 00:23:56.951047 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff2e035a-3702-489c-ad4b-b2892b4e8ac9-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh\" (UID: \"ff2e035a-3702-489c-ad4b-b2892b4e8ac9\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"
Feb 19 00:23:57 crc kubenswrapper[4825]: I0219 00:23:57.041070 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"
Feb 19 00:23:58 crc kubenswrapper[4825]: I0219 00:23:58.823744 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:23:58 crc kubenswrapper[4825]: I0219 00:23:58.824300 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:24:01 crc kubenswrapper[4825]: E0219 00:24:01.341645 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/openshift/origin-oauth-proxy:latest"
Feb 19 00:24:01 crc kubenswrapper[4825]: E0219 00:24:01.342381 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:oauth-proxy,Image:quay.io/openshift/origin-oauth-proxy:latest,Command:[],Args:[-https-address=:9092 -tls-cert=/etc/tls/private/tls.crt -tls-key=/etc/tls/private/tls.key -upstream=http://localhost:9090/ -cookie-secret-file=/etc/proxy/secrets/session_secret -openshift-service-account=prometheus-stf -openshift-sar={\"namespace\":\"service-telemetry\",\"resource\": \"prometheuses\", \"resourceAPIGroup\":\"monitoring.rhobs\", \"verb\":\"get\"} -openshift-delegate-urls={\"/\":{\"namespace\":\"service-telemetry\",\"resource\": \"prometheuses\", \"group\":\"monitoring.rhobs\", \"verb\":\"get\"}}],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:9092,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:secret-default-prometheus-proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-session-secret,ReadOnly:false,MountPath:/etc/proxy/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9xh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-default-0_service-telemetry(1a50e579-ea38-4bf0-bfd6-805ef1a6be97): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 19 00:24:01 crc kubenswrapper[4825]: E0219 00:24:01.343485 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/prometheus-default-0" podUID="1a50e579-ea38-4bf0-bfd6-805ef1a6be97"
Feb 19 00:24:01 crc kubenswrapper[4825]: I0219 00:24:01.670434 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh"]
Feb 19 00:24:01 crc kubenswrapper[4825]: I0219 00:24:01.966608 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" event={"ID":"c855158e-a4dc-467a-9d2b-923761e2cb45","Type":"ContainerStarted","Data":"e89ddd2f22c3ff2245bb76954b5a1e6aa8f4b4f639665d55db4936228005de60"}
Feb 19 00:24:01 crc kubenswrapper[4825]: I0219 00:24:01.968465 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" event={"ID":"ff2e035a-3702-489c-ad4b-b2892b4e8ac9","Type":"ContainerStarted","Data":"99245065f7cd4640aa0ee96458e6223f702a7a0cc261ae613ee88a79f6977b9a"}
Feb 19 00:24:01 crc kubenswrapper[4825]: I0219 00:24:01.971680 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" event={"ID":"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b","Type":"ContainerStarted","Data":"24c941f5922e131f9fd48b842577bdbc25c3457590f7c0c22c924224851c05e9"}
Feb 19 00:24:02 crc kubenswrapper[4825]: E0219 00:24:02.292491 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="1a50e579-ea38-4bf0-bfd6-805ef1a6be97"
Feb 19 00:24:02 crc kubenswrapper[4825]: I0219 00:24:02.982123 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" event={"ID":"ff2e035a-3702-489c-ad4b-b2892b4e8ac9","Type":"ContainerStarted","Data":"da2a1cf29b8d7cbce35d24d790aa230645e3023d8956e916d76544e51544a8e6"}
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.489360 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"]
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.490895 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.493758 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.494341 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.494729 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"]
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.562281 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hplpq\" (UniqueName: \"kubernetes.io/projected/f0225b38-d01b-4ce0-98f7-7032c8719113-kube-api-access-hplpq\") pod \"default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7\" (UID: \"f0225b38-d01b-4ce0-98f7-7032c8719113\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.562344 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f0225b38-d01b-4ce0-98f7-7032c8719113-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7\" (UID: \"f0225b38-d01b-4ce0-98f7-7032c8719113\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.562413 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f0225b38-d01b-4ce0-98f7-7032c8719113-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7\" (UID: \"f0225b38-d01b-4ce0-98f7-7032c8719113\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.562445 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f0225b38-d01b-4ce0-98f7-7032c8719113-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7\" (UID: \"f0225b38-d01b-4ce0-98f7-7032c8719113\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.637141 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.639211 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.667639 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hplpq\" (UniqueName: \"kubernetes.io/projected/f0225b38-d01b-4ce0-98f7-7032c8719113-kube-api-access-hplpq\") pod \"default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7\" (UID: \"f0225b38-d01b-4ce0-98f7-7032c8719113\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.667702 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f0225b38-d01b-4ce0-98f7-7032c8719113-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7\" (UID: \"f0225b38-d01b-4ce0-98f7-7032c8719113\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.667755 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f0225b38-d01b-4ce0-98f7-7032c8719113-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7\" (UID: \"f0225b38-d01b-4ce0-98f7-7032c8719113\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.667791 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f0225b38-d01b-4ce0-98f7-7032c8719113-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7\" (UID: \"f0225b38-d01b-4ce0-98f7-7032c8719113\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.668647 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f0225b38-d01b-4ce0-98f7-7032c8719113-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7\" (UID: \"f0225b38-d01b-4ce0-98f7-7032c8719113\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.670273 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f0225b38-d01b-4ce0-98f7-7032c8719113-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7\" (UID: \"f0225b38-d01b-4ce0-98f7-7032c8719113\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.682244 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f0225b38-d01b-4ce0-98f7-7032c8719113-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7\" (UID: \"f0225b38-d01b-4ce0-98f7-7032c8719113\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.685378 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hplpq\" (UniqueName: \"kubernetes.io/projected/f0225b38-d01b-4ce0-98f7-7032c8719113-kube-api-access-hplpq\") pod \"default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7\" (UID: \"f0225b38-d01b-4ce0-98f7-7032c8719113\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.691784 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Feb 19 00:24:03 crc kubenswrapper[4825]: I0219 00:24:03.822796 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"
Feb 19 00:24:03 crc kubenswrapper[4825]: E0219 00:24:03.844678 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="1a50e579-ea38-4bf0-bfd6-805ef1a6be97"
Feb 19 00:24:04 crc kubenswrapper[4825]: I0219 00:24:04.059615 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Feb 19 00:24:04 crc kubenswrapper[4825]: E0219 00:24:04.091261 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="1a50e579-ea38-4bf0-bfd6-805ef1a6be97"
Feb 19 00:24:04 crc kubenswrapper[4825]: I0219 00:24:04.559444 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7"]
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.008478 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7" event={"ID":"f0225b38-d01b-4ce0-98f7-7032c8719113","Type":"ContainerStarted","Data":"1fde28e70d7b7a326136005a59ec35b22a37e437cdc1ec44fd46ff5d885722b1"}
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.015671 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"0c751ba6-cfab-49ff-9243-e332977dfee1","Type":"ContainerStarted","Data":"b72662ac0b2f1cc3b91f25f339dd4edbca447e46387e79a68104dfaaa4385def"}
Feb 19 00:24:05 crc kubenswrapper[4825]: E0219 00:24:05.019610 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift/origin-oauth-proxy:latest\\\"\"" pod="service-telemetry/prometheus-default-0" podUID="1a50e579-ea38-4bf0-bfd6-805ef1a6be97"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.166261 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"]
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.167375 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.178303 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"]
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.180450 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.300402 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/af8d216a-4e33-4378-a149-fcfa67478d93-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2\" (UID: \"af8d216a-4e33-4378-a149-fcfa67478d93\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.300564 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/af8d216a-4e33-4378-a149-fcfa67478d93-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2\" (UID: \"af8d216a-4e33-4378-a149-fcfa67478d93\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.300613 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/af8d216a-4e33-4378-a149-fcfa67478d93-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2\" (UID: \"af8d216a-4e33-4378-a149-fcfa67478d93\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.300859 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x86dv\" (UniqueName: \"kubernetes.io/projected/af8d216a-4e33-4378-a149-fcfa67478d93-kube-api-access-x86dv\") pod \"default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2\" (UID: \"af8d216a-4e33-4378-a149-fcfa67478d93\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.402556 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x86dv\" (UniqueName: \"kubernetes.io/projected/af8d216a-4e33-4378-a149-fcfa67478d93-kube-api-access-x86dv\") pod \"default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2\" (UID: \"af8d216a-4e33-4378-a149-fcfa67478d93\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.402643 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/af8d216a-4e33-4378-a149-fcfa67478d93-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2\" (UID: \"af8d216a-4e33-4378-a149-fcfa67478d93\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.402694 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/af8d216a-4e33-4378-a149-fcfa67478d93-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2\" (UID: \"af8d216a-4e33-4378-a149-fcfa67478d93\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.402759 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/af8d216a-4e33-4378-a149-fcfa67478d93-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2\" (UID: \"af8d216a-4e33-4378-a149-fcfa67478d93\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.403621 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/af8d216a-4e33-4378-a149-fcfa67478d93-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2\" (UID: \"af8d216a-4e33-4378-a149-fcfa67478d93\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.403908 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/af8d216a-4e33-4378-a149-fcfa67478d93-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2\" (UID: \"af8d216a-4e33-4378-a149-fcfa67478d93\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.417555 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/af8d216a-4e33-4378-a149-fcfa67478d93-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2\" (UID: \"af8d216a-4e33-4378-a149-fcfa67478d93\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.423612 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x86dv\" (UniqueName: \"kubernetes.io/projected/af8d216a-4e33-4378-a149-fcfa67478d93-kube-api-access-x86dv\") pod \"default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2\" (UID: \"af8d216a-4e33-4378-a149-fcfa67478d93\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.494335 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"
Feb 19 00:24:05 crc kubenswrapper[4825]: I0219 00:24:05.976382 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2"]
Feb 19 00:24:05 crc kubenswrapper[4825]: W0219 00:24:05.989295 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf8d216a_4e33_4378_a149_fcfa67478d93.slice/crio-db116fb081980d180de9a738aca2c2fadc1d1f6c603289161a50acdfcd7a1569 WatchSource:0}: Error finding container db116fb081980d180de9a738aca2c2fadc1d1f6c603289161a50acdfcd7a1569: Status 404 returned error can't find the container with id db116fb081980d180de9a738aca2c2fadc1d1f6c603289161a50acdfcd7a1569
Feb 19 00:24:06 crc kubenswrapper[4825]: I0219 00:24:06.025160 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2" event={"ID":"af8d216a-4e33-4378-a149-fcfa67478d93","Type":"ContainerStarted","Data":"db116fb081980d180de9a738aca2c2fadc1d1f6c603289161a50acdfcd7a1569"}
Feb 19 00:24:08 crc kubenswrapper[4825]: I0219 00:24:08.047008 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"0c751ba6-cfab-49ff-9243-e332977dfee1","Type":"ContainerStarted","Data":"29d38d50512ef26f3a0400f8e5b7998d884cb1356eb6689d721a5f4fd592d9dd"}
Feb 19 00:24:12 crc kubenswrapper[4825]: I0219 00:24:12.083890 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7" event={"ID":"f0225b38-d01b-4ce0-98f7-7032c8719113","Type":"ContainerStarted","Data":"4fadecfdbf7780a8574b1c34424c462895be71e082489d9fab6b3a0ebfdacea2"}
Feb 19 00:24:13 crc kubenswrapper[4825]: I0219 00:24:13.102522 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"0c751ba6-cfab-49ff-9243-e332977dfee1","Type":"ContainerStarted","Data":"da77f73950b9449660bd114ace54af264a450a473f40fb67b31c97b43a259d39"}
Feb 19 00:24:13 crc kubenswrapper[4825]: I0219 00:24:13.110073 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" event={"ID":"c855158e-a4dc-467a-9d2b-923761e2cb45","Type":"ContainerStarted","Data":"7185a536b97eb84c0e6f87a2b3e263bd8202e5d2569476a2d3f87f87d3b12f43"}
Feb 19 00:24:13 crc kubenswrapper[4825]: I0219 00:24:13.112775 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" event={"ID":"ff2e035a-3702-489c-ad4b-b2892b4e8ac9","Type":"ContainerStarted","Data":"f10ed9796fbaec437cfa8ced511827cef81a2866e749b0c1fcc7313b4d64a76e"}
Feb 19 00:24:13 crc kubenswrapper[4825]: I0219 00:24:13.114671 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2" event={"ID":"af8d216a-4e33-4378-a149-fcfa67478d93","Type":"ContainerStarted","Data":"6845113832a467d855e6e224ce9d0c1605694ca9b32eff18cdb24e9c0690f948"}
Feb 19 00:24:13 crc kubenswrapper[4825]: I0219 00:24:13.116965 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" event={"ID":"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b","Type":"ContainerStarted","Data":"8ec4a3c3fc9e22953f56fc82a629392ad83a2311793aefaff906c5f05690a7f3"}
Feb 19 00:24:13 crc kubenswrapper[4825]: I0219 00:24:13.132818 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=21.923265765 podStartE2EDuration="43.132795587s" podCreationTimestamp="2026-02-19 00:23:30 +0000 UTC" firstStartedPulling="2026-02-19 00:23:50.889215703 +0000 UTC m=+976.580181750" lastFinishedPulling="2026-02-19 00:24:12.098745525 +0000 UTC m=+997.789711572" observedRunningTime="2026-02-19 00:24:13.130726342 +0000 UTC m=+998.821692399" watchObservedRunningTime="2026-02-19 00:24:13.132795587 +0000 UTC m=+998.823761654"
Feb 19 00:24:18 crc kubenswrapper[4825]: I0219 00:24:18.541715 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-2ldkb"]
Feb 19 00:24:18 crc kubenswrapper[4825]: I0219 00:24:18.542401 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" podUID="c5872ac9-2a95-4315-b1c9-f31ad523fc98" containerName="default-interconnect" containerID="cri-o://6439ced09639960b33b272aab27055f8cf08da9a79ea4c73b79d66caeff16877" gracePeriod=30
Feb 19 00:24:19 crc kubenswrapper[4825]: I0219 00:24:19.199162 4825 generic.go:334] "Generic (PLEG): container finished" podID="f0225b38-d01b-4ce0-98f7-7032c8719113" containerID="4fadecfdbf7780a8574b1c34424c462895be71e082489d9fab6b3a0ebfdacea2" exitCode=0
Feb 19 00:24:19 crc kubenswrapper[4825]: I0219 00:24:19.199578 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7" event={"ID":"f0225b38-d01b-4ce0-98f7-7032c8719113","Type":"ContainerDied","Data":"4fadecfdbf7780a8574b1c34424c462895be71e082489d9fab6b3a0ebfdacea2"}
Feb 19 00:24:19 crc kubenswrapper[4825]: I0219 00:24:19.201983 4825 generic.go:334] "Generic (PLEG): container finished" podID="c5872ac9-2a95-4315-b1c9-f31ad523fc98" containerID="6439ced09639960b33b272aab27055f8cf08da9a79ea4c73b79d66caeff16877" exitCode=0
Feb 19 00:24:19 crc kubenswrapper[4825]: I0219 00:24:19.202025 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" event={"ID":"c5872ac9-2a95-4315-b1c9-f31ad523fc98","Type":"ContainerDied","Data":"6439ced09639960b33b272aab27055f8cf08da9a79ea4c73b79d66caeff16877"}
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.227343 4825 generic.go:334] "Generic (PLEG): container finished" podID="c855158e-a4dc-467a-9d2b-923761e2cb45" containerID="7185a536b97eb84c0e6f87a2b3e263bd8202e5d2569476a2d3f87f87d3b12f43" exitCode=0
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.227423 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" event={"ID":"c855158e-a4dc-467a-9d2b-923761e2cb45","Type":"ContainerDied","Data":"7185a536b97eb84c0e6f87a2b3e263bd8202e5d2569476a2d3f87f87d3b12f43"}
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.231117 4825 generic.go:334] "Generic (PLEG): container finished" podID="ff2e035a-3702-489c-ad4b-b2892b4e8ac9" containerID="f10ed9796fbaec437cfa8ced511827cef81a2866e749b0c1fcc7313b4d64a76e" exitCode=0
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.231185 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" event={"ID":"ff2e035a-3702-489c-ad4b-b2892b4e8ac9","Type":"ContainerDied","Data":"f10ed9796fbaec437cfa8ced511827cef81a2866e749b0c1fcc7313b4d64a76e"}
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.237204 4825 generic.go:334] "Generic (PLEG): container finished" podID="af8d216a-4e33-4378-a149-fcfa67478d93" containerID="6845113832a467d855e6e224ce9d0c1605694ca9b32eff18cdb24e9c0690f948" exitCode=0
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.237291 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2" event={"ID":"af8d216a-4e33-4378-a149-fcfa67478d93","Type":"ContainerDied","Data":"6845113832a467d855e6e224ce9d0c1605694ca9b32eff18cdb24e9c0690f948"}
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.241789 4825 generic.go:334] "Generic (PLEG): container finished" podID="1deef3e8-3d46-4e72-a60e-3d5166dc6a4b" containerID="8ec4a3c3fc9e22953f56fc82a629392ad83a2311793aefaff906c5f05690a7f3" exitCode=0
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.241848 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" event={"ID":"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b","Type":"ContainerDied","Data":"8ec4a3c3fc9e22953f56fc82a629392ad83a2311793aefaff906c5f05690a7f3"}
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.568850 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-2ldkb"
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.624802 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-z4gnb"]
Feb 19 00:24:20 crc kubenswrapper[4825]: E0219 00:24:20.625168 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5872ac9-2a95-4315-b1c9-f31ad523fc98" containerName="default-interconnect"
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.625187 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5872ac9-2a95-4315-b1c9-f31ad523fc98" containerName="default-interconnect"
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.625327 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5872ac9-2a95-4315-b1c9-f31ad523fc98" containerName="default-interconnect"
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.625945 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-z4gnb"
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.634275 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-z4gnb"]
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.722254 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c5872ac9-2a95-4315-b1c9-f31ad523fc98-sasl-config\") pod \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") "
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.722376 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-openstack-credentials\") pod \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") "
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.722459 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-sasl-users\") pod \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") "
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.722561 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-openstack-ca\") pod \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") "
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.722638 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-inter-router-credentials\") pod \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") "
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.722709 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fzrp\" (UniqueName: \"kubernetes.io/projected/c5872ac9-2a95-4315-b1c9-f31ad523fc98-kube-api-access-6fzrp\") pod \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") "
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.722791 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-inter-router-ca\") pod \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\" (UID: \"c5872ac9-2a95-4315-b1c9-f31ad523fc98\") "
Feb 19 00:24:20 crc kubenswrapper[4825]: I0219
00:24:20.723032 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-sasl-config\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.723071 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.723122 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.723166 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.723212 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.723214 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5872ac9-2a95-4315-b1c9-f31ad523fc98-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "c5872ac9-2a95-4315-b1c9-f31ad523fc98" (UID: "c5872ac9-2a95-4315-b1c9-f31ad523fc98"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.723251 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-sasl-users\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.723564 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8h9r\" (UniqueName: \"kubernetes.io/projected/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-kube-api-access-x8h9r\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.723662 4825 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c5872ac9-2a95-4315-b1c9-f31ad523fc98-sasl-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.743026 4825 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "c5872ac9-2a95-4315-b1c9-f31ad523fc98" (UID: "c5872ac9-2a95-4315-b1c9-f31ad523fc98"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.743030 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5872ac9-2a95-4315-b1c9-f31ad523fc98-kube-api-access-6fzrp" (OuterVolumeSpecName: "kube-api-access-6fzrp") pod "c5872ac9-2a95-4315-b1c9-f31ad523fc98" (UID: "c5872ac9-2a95-4315-b1c9-f31ad523fc98"). InnerVolumeSpecName "kube-api-access-6fzrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.743545 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "c5872ac9-2a95-4315-b1c9-f31ad523fc98" (UID: "c5872ac9-2a95-4315-b1c9-f31ad523fc98"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.746753 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "c5872ac9-2a95-4315-b1c9-f31ad523fc98" (UID: "c5872ac9-2a95-4315-b1c9-f31ad523fc98"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.750917 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "c5872ac9-2a95-4315-b1c9-f31ad523fc98" (UID: "c5872ac9-2a95-4315-b1c9-f31ad523fc98"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.751018 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "c5872ac9-2a95-4315-b1c9-f31ad523fc98" (UID: "c5872ac9-2a95-4315-b1c9-f31ad523fc98"). InnerVolumeSpecName "default-interconnect-openstack-credentials". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.824934 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.825524 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.825557 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.825594 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-sasl-users\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.825634 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-x8h9r\" (UniqueName: \"kubernetes.io/projected/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-kube-api-access-x8h9r\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.825679 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-sasl-config\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.825700 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.825745 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fzrp\" (UniqueName: \"kubernetes.io/projected/c5872ac9-2a95-4315-b1c9-f31ad523fc98-kube-api-access-6fzrp\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.825758 4825 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.825770 4825 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.825780 4825 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-sasl-users\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.825857 4825 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.825898 4825 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c5872ac9-2a95-4315-b1c9-f31ad523fc98-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.829759 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.830560 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-sasl-config\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.834266 4825 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.838600 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.848250 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8h9r\" (UniqueName: \"kubernetes.io/projected/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-kube-api-access-x8h9r\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.849420 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-sasl-users\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: \"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:20 crc kubenswrapper[4825]: I0219 00:24:20.850846 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a10b7ca9-ed06-411b-a849-c9b70b89c7bc-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-z4gnb\" (UID: 
\"a10b7ca9-ed06-411b-a849-c9b70b89c7bc\") " pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.006012 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.255236 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" event={"ID":"c5872ac9-2a95-4315-b1c9-f31ad523fc98","Type":"ContainerDied","Data":"991c03b139436a831ea139ff7019689348dd9000281523a3790e05c1d1d1fb58"} Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.256086 4825 scope.go:117] "RemoveContainer" containerID="6439ced09639960b33b272aab27055f8cf08da9a79ea4c73b79d66caeff16877" Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.256297 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-2ldkb" Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.259457 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" event={"ID":"ff2e035a-3702-489c-ad4b-b2892b4e8ac9","Type":"ContainerStarted","Data":"78f751c25089c5e26d9db2dde36a7cdaac13e3f133db81bc8a20baa91cdf6a06"} Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.260342 4825 scope.go:117] "RemoveContainer" containerID="f10ed9796fbaec437cfa8ced511827cef81a2866e749b0c1fcc7313b4d64a76e" Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.261694 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2" event={"ID":"af8d216a-4e33-4378-a149-fcfa67478d93","Type":"ContainerStarted","Data":"b89d02d87ac788c557beaa2c6f882fb45e36a86231070924fb9c79fd3933b7f7"} Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.262439 4825 scope.go:117] 
"RemoveContainer" containerID="6845113832a467d855e6e224ce9d0c1605694ca9b32eff18cdb24e9c0690f948" Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.268767 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" event={"ID":"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b","Type":"ContainerStarted","Data":"9a7674d2e0e0c8a5d39ffc32e74cb327d366b81b974e487e4c4326194c1ea11e"} Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.269296 4825 scope.go:117] "RemoveContainer" containerID="8ec4a3c3fc9e22953f56fc82a629392ad83a2311793aefaff906c5f05690a7f3" Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.274472 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7" event={"ID":"f0225b38-d01b-4ce0-98f7-7032c8719113","Type":"ContainerStarted","Data":"de541b93af5f910706295c5e0979ebb8b8d5f26432a0d32f7c09fd09c21a2a0a"} Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.275130 4825 scope.go:117] "RemoveContainer" containerID="4fadecfdbf7780a8574b1c34424c462895be71e082489d9fab6b3a0ebfdacea2" Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.293783 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1a50e579-ea38-4bf0-bfd6-805ef1a6be97","Type":"ContainerStarted","Data":"f729023af688b8ccc0608f5a66ceaa8c65d30dfcc613d659cf0e8e3246c79bdb"} Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.310038 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" event={"ID":"c855158e-a4dc-467a-9d2b-923761e2cb45","Type":"ContainerStarted","Data":"d50ea81c2b3bc4da6f1cb86128b4ad4675d7025dd96c6ae2f51ea798cdd23e08"} Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.311343 4825 scope.go:117] "RemoveContainer" containerID="7185a536b97eb84c0e6f87a2b3e263bd8202e5d2569476a2d3f87f87d3b12f43" 
Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.341576 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-2ldkb"] Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.348870 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-2ldkb"] Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.368666 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.22079528 podStartE2EDuration="1m6.368640033s" podCreationTimestamp="2026-02-19 00:23:15 +0000 UTC" firstStartedPulling="2026-02-19 00:23:18.901456992 +0000 UTC m=+944.592423039" lastFinishedPulling="2026-02-19 00:24:21.049301745 +0000 UTC m=+1006.740267792" observedRunningTime="2026-02-19 00:24:21.360747462 +0000 UTC m=+1007.051713529" watchObservedRunningTime="2026-02-19 00:24:21.368640033 +0000 UTC m=+1007.059606080" Feb 19 00:24:21 crc kubenswrapper[4825]: W0219 00:24:21.447320 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda10b7ca9_ed06_411b_a849_c9b70b89c7bc.slice/crio-87ac72e1807596d6d6527d25d37114783b4271c51654cff31bcdf27dd984a08d WatchSource:0}: Error finding container 87ac72e1807596d6d6527d25d37114783b4271c51654cff31bcdf27dd984a08d: Status 404 returned error can't find the container with id 87ac72e1807596d6d6527d25d37114783b4271c51654cff31bcdf27dd984a08d Feb 19 00:24:21 crc kubenswrapper[4825]: I0219 00:24:21.447804 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-z4gnb"] Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.325360 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" 
event={"ID":"ff2e035a-3702-489c-ad4b-b2892b4e8ac9","Type":"ContainerStarted","Data":"45b1b701eadcade95441859985b33e3e8f77ee4ce05479a4fa8a081aa7f75adf"} Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.328435 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2" event={"ID":"af8d216a-4e33-4378-a149-fcfa67478d93","Type":"ContainerStarted","Data":"2d35e78e137be37caad85e7d06a235f8e17f04a75e4094a90cd9124c4eafddb8"} Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.331514 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" event={"ID":"a10b7ca9-ed06-411b-a849-c9b70b89c7bc","Type":"ContainerStarted","Data":"73e313a14162e32d38595e8951db9748b44b2ce0301dd991e23b7a17b67aa061"} Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.331885 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" event={"ID":"a10b7ca9-ed06-411b-a849-c9b70b89c7bc","Type":"ContainerStarted","Data":"87ac72e1807596d6d6527d25d37114783b4271c51654cff31bcdf27dd984a08d"} Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.341126 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" event={"ID":"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b","Type":"ContainerStarted","Data":"3d1200489102c65ef974d101f6157ff66e3b530057c27f5c7def52f197379d10"} Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.344096 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7" event={"ID":"f0225b38-d01b-4ce0-98f7-7032c8719113","Type":"ContainerStarted","Data":"5bf369634385613f305e940222c4844094267e1acc5800cd3f12fd7531b9ddde"} Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.347080 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" event={"ID":"c855158e-a4dc-467a-9d2b-923761e2cb45","Type":"ContainerStarted","Data":"d39738542744ac3c6b196fb08b6339dfa2e9d7d9e8e30161a0f7cf7a5aa658ac"} Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.354001 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" podStartSLOduration=7.388261491 podStartE2EDuration="27.35397994s" podCreationTimestamp="2026-02-19 00:23:55 +0000 UTC" firstStartedPulling="2026-02-19 00:24:01.694691974 +0000 UTC m=+987.385658021" lastFinishedPulling="2026-02-19 00:24:21.660410423 +0000 UTC m=+1007.351376470" observedRunningTime="2026-02-19 00:24:22.347678241 +0000 UTC m=+1008.038644288" watchObservedRunningTime="2026-02-19 00:24:22.35397994 +0000 UTC m=+1008.044945997" Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.383666 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-z4gnb" podStartSLOduration=4.383641975 podStartE2EDuration="4.383641975s" podCreationTimestamp="2026-02-19 00:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:24:22.381272911 +0000 UTC m=+1008.072238978" watchObservedRunningTime="2026-02-19 00:24:22.383641975 +0000 UTC m=+1008.074608022" Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.411728 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" podStartSLOduration=3.278491316 podStartE2EDuration="34.411702057s" podCreationTimestamp="2026-02-19 00:23:48 +0000 UTC" firstStartedPulling="2026-02-19 00:23:50.606555708 +0000 UTC m=+976.297521745" lastFinishedPulling="2026-02-19 00:24:21.739766439 +0000 UTC m=+1007.430732486" 
observedRunningTime="2026-02-19 00:24:22.403879337 +0000 UTC m=+1008.094845384" watchObservedRunningTime="2026-02-19 00:24:22.411702057 +0000 UTC m=+1008.102668094" Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.434811 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2" podStartSLOduration=1.7829858459999999 podStartE2EDuration="17.434787865s" podCreationTimestamp="2026-02-19 00:24:05 +0000 UTC" firstStartedPulling="2026-02-19 00:24:05.991943167 +0000 UTC m=+991.682909214" lastFinishedPulling="2026-02-19 00:24:21.643745196 +0000 UTC m=+1007.334711233" observedRunningTime="2026-02-19 00:24:22.429832372 +0000 UTC m=+1008.120798419" watchObservedRunningTime="2026-02-19 00:24:22.434787865 +0000 UTC m=+1008.125753932" Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.461353 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7" podStartSLOduration=2.322749183 podStartE2EDuration="19.461325356s" podCreationTimestamp="2026-02-19 00:24:03 +0000 UTC" firstStartedPulling="2026-02-19 00:24:04.599948653 +0000 UTC m=+990.290914700" lastFinishedPulling="2026-02-19 00:24:21.738524826 +0000 UTC m=+1007.429490873" observedRunningTime="2026-02-19 00:24:22.454221976 +0000 UTC m=+1008.145188023" watchObservedRunningTime="2026-02-19 00:24:22.461325356 +0000 UTC m=+1008.152291403" Feb 19 00:24:22 crc kubenswrapper[4825]: I0219 00:24:22.475245 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" podStartSLOduration=4.849894038 podStartE2EDuration="32.475224019s" podCreationTimestamp="2026-02-19 00:23:50 +0000 UTC" firstStartedPulling="2026-02-19 00:23:54.114761887 +0000 UTC m=+979.805727944" lastFinishedPulling="2026-02-19 00:24:21.740091878 +0000 UTC m=+1007.431057925" 
observedRunningTime="2026-02-19 00:24:22.47265468 +0000 UTC m=+1008.163620747" watchObservedRunningTime="2026-02-19 00:24:22.475224019 +0000 UTC m=+1008.166190056" Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.074873 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5872ac9-2a95-4315-b1c9-f31ad523fc98" path="/var/lib/kubelet/pods/c5872ac9-2a95-4315-b1c9-f31ad523fc98/volumes" Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.358559 4825 generic.go:334] "Generic (PLEG): container finished" podID="f0225b38-d01b-4ce0-98f7-7032c8719113" containerID="5bf369634385613f305e940222c4844094267e1acc5800cd3f12fd7531b9ddde" exitCode=0 Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.358653 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7" event={"ID":"f0225b38-d01b-4ce0-98f7-7032c8719113","Type":"ContainerDied","Data":"5bf369634385613f305e940222c4844094267e1acc5800cd3f12fd7531b9ddde"} Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.358721 4825 scope.go:117] "RemoveContainer" containerID="4fadecfdbf7780a8574b1c34424c462895be71e082489d9fab6b3a0ebfdacea2" Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.359244 4825 scope.go:117] "RemoveContainer" containerID="5bf369634385613f305e940222c4844094267e1acc5800cd3f12fd7531b9ddde" Feb 19 00:24:23 crc kubenswrapper[4825]: E0219 00:24:23.359524 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7_service-telemetry(f0225b38-d01b-4ce0-98f7-7032c8719113)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7" podUID="f0225b38-d01b-4ce0-98f7-7032c8719113" Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.366700 4825 generic.go:334] "Generic (PLEG): container finished" 
podID="c855158e-a4dc-467a-9d2b-923761e2cb45" containerID="d39738542744ac3c6b196fb08b6339dfa2e9d7d9e8e30161a0f7cf7a5aa658ac" exitCode=0 Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.366799 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" event={"ID":"c855158e-a4dc-467a-9d2b-923761e2cb45","Type":"ContainerDied","Data":"d39738542744ac3c6b196fb08b6339dfa2e9d7d9e8e30161a0f7cf7a5aa658ac"} Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.367341 4825 scope.go:117] "RemoveContainer" containerID="d39738542744ac3c6b196fb08b6339dfa2e9d7d9e8e30161a0f7cf7a5aa658ac" Feb 19 00:24:23 crc kubenswrapper[4825]: E0219 00:24:23.367574 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7996dc9458-789zq_service-telemetry(c855158e-a4dc-467a-9d2b-923761e2cb45)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" podUID="c855158e-a4dc-467a-9d2b-923761e2cb45" Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.370843 4825 generic.go:334] "Generic (PLEG): container finished" podID="ff2e035a-3702-489c-ad4b-b2892b4e8ac9" containerID="45b1b701eadcade95441859985b33e3e8f77ee4ce05479a4fa8a081aa7f75adf" exitCode=0 Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.370888 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" event={"ID":"ff2e035a-3702-489c-ad4b-b2892b4e8ac9","Type":"ContainerDied","Data":"45b1b701eadcade95441859985b33e3e8f77ee4ce05479a4fa8a081aa7f75adf"} Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.371595 4825 scope.go:117] "RemoveContainer" containerID="45b1b701eadcade95441859985b33e3e8f77ee4ce05479a4fa8a081aa7f75adf" Feb 19 00:24:23 crc kubenswrapper[4825]: E0219 00:24:23.371835 4825 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh_service-telemetry(ff2e035a-3702-489c-ad4b-b2892b4e8ac9)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" podUID="ff2e035a-3702-489c-ad4b-b2892b4e8ac9" Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.374094 4825 generic.go:334] "Generic (PLEG): container finished" podID="af8d216a-4e33-4378-a149-fcfa67478d93" containerID="2d35e78e137be37caad85e7d06a235f8e17f04a75e4094a90cd9124c4eafddb8" exitCode=0 Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.374150 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2" event={"ID":"af8d216a-4e33-4378-a149-fcfa67478d93","Type":"ContainerDied","Data":"2d35e78e137be37caad85e7d06a235f8e17f04a75e4094a90cd9124c4eafddb8"} Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.374720 4825 scope.go:117] "RemoveContainer" containerID="2d35e78e137be37caad85e7d06a235f8e17f04a75e4094a90cd9124c4eafddb8" Feb 19 00:24:23 crc kubenswrapper[4825]: E0219 00:24:23.374949 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2_service-telemetry(af8d216a-4e33-4378-a149-fcfa67478d93)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2" podUID="af8d216a-4e33-4378-a149-fcfa67478d93" Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.381969 4825 generic.go:334] "Generic (PLEG): container finished" podID="1deef3e8-3d46-4e72-a60e-3d5166dc6a4b" containerID="3d1200489102c65ef974d101f6157ff66e3b530057c27f5c7def52f197379d10" exitCode=0 Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 
00:24:23.382038 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" event={"ID":"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b","Type":"ContainerDied","Data":"3d1200489102c65ef974d101f6157ff66e3b530057c27f5c7def52f197379d10"} Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.382825 4825 scope.go:117] "RemoveContainer" containerID="3d1200489102c65ef974d101f6157ff66e3b530057c27f5c7def52f197379d10" Feb 19 00:24:23 crc kubenswrapper[4825]: E0219 00:24:23.383083 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd_service-telemetry(1deef3e8-3d46-4e72-a60e-3d5166dc6a4b)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" podUID="1deef3e8-3d46-4e72-a60e-3d5166dc6a4b" Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.450047 4825 scope.go:117] "RemoveContainer" containerID="7185a536b97eb84c0e6f87a2b3e263bd8202e5d2569476a2d3f87f87d3b12f43" Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.499723 4825 scope.go:117] "RemoveContainer" containerID="f10ed9796fbaec437cfa8ced511827cef81a2866e749b0c1fcc7313b4d64a76e" Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.552760 4825 scope.go:117] "RemoveContainer" containerID="6845113832a467d855e6e224ce9d0c1605694ca9b32eff18cdb24e9c0690f948" Feb 19 00:24:23 crc kubenswrapper[4825]: I0219 00:24:23.593213 4825 scope.go:117] "RemoveContainer" containerID="8ec4a3c3fc9e22953f56fc82a629392ad83a2311793aefaff906c5f05690a7f3" Feb 19 00:24:24 crc kubenswrapper[4825]: I0219 00:24:24.393316 4825 scope.go:117] "RemoveContainer" containerID="5bf369634385613f305e940222c4844094267e1acc5800cd3f12fd7531b9ddde" Feb 19 00:24:24 crc kubenswrapper[4825]: E0219 00:24:24.393702 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7_service-telemetry(f0225b38-d01b-4ce0-98f7-7032c8719113)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7" podUID="f0225b38-d01b-4ce0-98f7-7032c8719113" Feb 19 00:24:24 crc kubenswrapper[4825]: I0219 00:24:24.395200 4825 scope.go:117] "RemoveContainer" containerID="d39738542744ac3c6b196fb08b6339dfa2e9d7d9e8e30161a0f7cf7a5aa658ac" Feb 19 00:24:24 crc kubenswrapper[4825]: E0219 00:24:24.395397 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7996dc9458-789zq_service-telemetry(c855158e-a4dc-467a-9d2b-923761e2cb45)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" podUID="c855158e-a4dc-467a-9d2b-923761e2cb45" Feb 19 00:24:24 crc kubenswrapper[4825]: I0219 00:24:24.398087 4825 scope.go:117] "RemoveContainer" containerID="45b1b701eadcade95441859985b33e3e8f77ee4ce05479a4fa8a081aa7f75adf" Feb 19 00:24:24 crc kubenswrapper[4825]: E0219 00:24:24.398266 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh_service-telemetry(ff2e035a-3702-489c-ad4b-b2892b4e8ac9)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" podUID="ff2e035a-3702-489c-ad4b-b2892b4e8ac9" Feb 19 00:24:24 crc kubenswrapper[4825]: I0219 00:24:24.400322 4825 scope.go:117] "RemoveContainer" containerID="2d35e78e137be37caad85e7d06a235f8e17f04a75e4094a90cd9124c4eafddb8" Feb 19 00:24:24 crc kubenswrapper[4825]: E0219 00:24:24.400536 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2_service-telemetry(af8d216a-4e33-4378-a149-fcfa67478d93)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2" podUID="af8d216a-4e33-4378-a149-fcfa67478d93" Feb 19 00:24:24 crc kubenswrapper[4825]: I0219 00:24:24.402484 4825 scope.go:117] "RemoveContainer" containerID="3d1200489102c65ef974d101f6157ff66e3b530057c27f5c7def52f197379d10" Feb 19 00:24:24 crc kubenswrapper[4825]: E0219 00:24:24.402692 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd_service-telemetry(1deef3e8-3d46-4e72-a60e-3d5166dc6a4b)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" podUID="1deef3e8-3d46-4e72-a60e-3d5166dc6a4b" Feb 19 00:24:28 crc kubenswrapper[4825]: I0219 00:24:28.823253 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:24:28 crc kubenswrapper[4825]: I0219 00:24:28.823776 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:24:31 crc kubenswrapper[4825]: I0219 00:24:31.899607 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 19 00:24:31 crc kubenswrapper[4825]: I0219 00:24:31.902145 4825 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 19 00:24:31 crc kubenswrapper[4825]: I0219 00:24:31.905608 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Feb 19 00:24:31 crc kubenswrapper[4825]: I0219 00:24:31.906822 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 19 00:24:31 crc kubenswrapper[4825]: I0219 00:24:31.913863 4825 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Feb 19 00:24:32 crc kubenswrapper[4825]: I0219 00:24:32.027911 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/10111f12-b669-4417-a698-2691e21e5c62-qdr-test-config\") pod \"qdr-test\" (UID: \"10111f12-b669-4417-a698-2691e21e5c62\") " pod="service-telemetry/qdr-test" Feb 19 00:24:32 crc kubenswrapper[4825]: I0219 00:24:32.028427 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/10111f12-b669-4417-a698-2691e21e5c62-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"10111f12-b669-4417-a698-2691e21e5c62\") " pod="service-telemetry/qdr-test" Feb 19 00:24:32 crc kubenswrapper[4825]: I0219 00:24:32.028493 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ktwn\" (UniqueName: \"kubernetes.io/projected/10111f12-b669-4417-a698-2691e21e5c62-kube-api-access-7ktwn\") pod \"qdr-test\" (UID: \"10111f12-b669-4417-a698-2691e21e5c62\") " pod="service-telemetry/qdr-test" Feb 19 00:24:32 crc kubenswrapper[4825]: I0219 00:24:32.129683 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ktwn\" (UniqueName: 
\"kubernetes.io/projected/10111f12-b669-4417-a698-2691e21e5c62-kube-api-access-7ktwn\") pod \"qdr-test\" (UID: \"10111f12-b669-4417-a698-2691e21e5c62\") " pod="service-telemetry/qdr-test" Feb 19 00:24:32 crc kubenswrapper[4825]: I0219 00:24:32.130914 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/10111f12-b669-4417-a698-2691e21e5c62-qdr-test-config\") pod \"qdr-test\" (UID: \"10111f12-b669-4417-a698-2691e21e5c62\") " pod="service-telemetry/qdr-test" Feb 19 00:24:32 crc kubenswrapper[4825]: I0219 00:24:32.129884 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/10111f12-b669-4417-a698-2691e21e5c62-qdr-test-config\") pod \"qdr-test\" (UID: \"10111f12-b669-4417-a698-2691e21e5c62\") " pod="service-telemetry/qdr-test" Feb 19 00:24:32 crc kubenswrapper[4825]: I0219 00:24:32.131041 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/10111f12-b669-4417-a698-2691e21e5c62-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"10111f12-b669-4417-a698-2691e21e5c62\") " pod="service-telemetry/qdr-test" Feb 19 00:24:32 crc kubenswrapper[4825]: I0219 00:24:32.138294 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/10111f12-b669-4417-a698-2691e21e5c62-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"10111f12-b669-4417-a698-2691e21e5c62\") " pod="service-telemetry/qdr-test" Feb 19 00:24:32 crc kubenswrapper[4825]: I0219 00:24:32.165171 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ktwn\" (UniqueName: \"kubernetes.io/projected/10111f12-b669-4417-a698-2691e21e5c62-kube-api-access-7ktwn\") pod \"qdr-test\" (UID: 
\"10111f12-b669-4417-a698-2691e21e5c62\") " pod="service-telemetry/qdr-test" Feb 19 00:24:32 crc kubenswrapper[4825]: I0219 00:24:32.248609 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 19 00:24:32 crc kubenswrapper[4825]: I0219 00:24:32.471919 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 19 00:24:33 crc kubenswrapper[4825]: I0219 00:24:33.470082 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"10111f12-b669-4417-a698-2691e21e5c62","Type":"ContainerStarted","Data":"108a62495d11d2fc52025c780baf8c2a54018486c0bc6bc51188defd468e5143"} Feb 19 00:24:36 crc kubenswrapper[4825]: I0219 00:24:36.066082 4825 scope.go:117] "RemoveContainer" containerID="3d1200489102c65ef974d101f6157ff66e3b530057c27f5c7def52f197379d10" Feb 19 00:24:37 crc kubenswrapper[4825]: I0219 00:24:37.066496 4825 scope.go:117] "RemoveContainer" containerID="45b1b701eadcade95441859985b33e3e8f77ee4ce05479a4fa8a081aa7f75adf" Feb 19 00:24:38 crc kubenswrapper[4825]: I0219 00:24:38.065692 4825 scope.go:117] "RemoveContainer" containerID="2d35e78e137be37caad85e7d06a235f8e17f04a75e4094a90cd9124c4eafddb8" Feb 19 00:24:39 crc kubenswrapper[4825]: I0219 00:24:39.066572 4825 scope.go:117] "RemoveContainer" containerID="5bf369634385613f305e940222c4844094267e1acc5800cd3f12fd7531b9ddde" Feb 19 00:24:40 crc kubenswrapper[4825]: I0219 00:24:40.066486 4825 scope.go:117] "RemoveContainer" containerID="d39738542744ac3c6b196fb08b6339dfa2e9d7d9e8e30161a0f7cf7a5aa658ac" Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 00:24:42.544447 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh" event={"ID":"ff2e035a-3702-489c-ad4b-b2892b4e8ac9","Type":"ContainerStarted","Data":"2afa98efa25f700525feeb5b3ec5fafb66f402b5e7ce1a108fbd588cf308c626"} Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 
00:24:42.547676 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7" event={"ID":"f0225b38-d01b-4ce0-98f7-7032c8719113","Type":"ContainerStarted","Data":"37a4d2e32c1cbbdb4c1a8f7fd7fc25aa4becbcbc0146e1de5c69021f1c36555e"} Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 00:24:42.549299 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"10111f12-b669-4417-a698-2691e21e5c62","Type":"ContainerStarted","Data":"20951339103ce4abca5eb252c7be9fd4573145acd3b503a1ce7d7e45660c41b7"} Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 00:24:42.605901 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.478943271 podStartE2EDuration="11.605864248s" podCreationTimestamp="2026-02-19 00:24:31 +0000 UTC" firstStartedPulling="2026-02-19 00:24:32.478992273 +0000 UTC m=+1018.169958320" lastFinishedPulling="2026-02-19 00:24:41.60591325 +0000 UTC m=+1027.296879297" observedRunningTime="2026-02-19 00:24:42.603928757 +0000 UTC m=+1028.294894804" watchObservedRunningTime="2026-02-19 00:24:42.605864248 +0000 UTC m=+1028.296830295" Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 00:24:42.918410 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-4bj7d"] Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 00:24:42.920304 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 00:24:42.923007 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 00:24:42.923329 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 00:24:42.924113 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 00:24:42.924166 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 00:24:42.924302 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 00:24:42.925624 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Feb 19 00:24:42 crc kubenswrapper[4825]: I0219 00:24:42.937456 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-4bj7d"] Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.025353 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-ceilometer-publisher\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.025420 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" 
(UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-sensubility-config\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.025459 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.025563 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-collectd-config\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.025623 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.025646 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-healthcheck-log\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 
00:24:43.025670 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh68m\" (UniqueName: \"kubernetes.io/projected/bcaeb65a-09dc-4540-a9aa-897ed34b859f-kube-api-access-jh68m\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.127917 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh68m\" (UniqueName: \"kubernetes.io/projected/bcaeb65a-09dc-4540-a9aa-897ed34b859f-kube-api-access-jh68m\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.127981 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-ceilometer-publisher\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.128008 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-sensubility-config\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.128038 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " 
pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.128122 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-collectd-config\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.128181 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.128206 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-healthcheck-log\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.129100 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-ceilometer-publisher\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.129205 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: 
\"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.129207 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-healthcheck-log\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.129707 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-collectd-config\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.129710 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-sensubility-config\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.129954 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.154033 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh68m\" (UniqueName: \"kubernetes.io/projected/bcaeb65a-09dc-4540-a9aa-897ed34b859f-kube-api-access-jh68m\") pod \"stf-smoketest-smoke1-4bj7d\" (UID: 
\"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.214552 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.216617 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.227247 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.256538 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.333213 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlkqs\" (UniqueName: \"kubernetes.io/projected/25a5900c-a3f9-42cc-857f-a4d18d23a6bc-kube-api-access-wlkqs\") pod \"curl\" (UID: \"25a5900c-a3f9-42cc-857f-a4d18d23a6bc\") " pod="service-telemetry/curl" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.434743 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlkqs\" (UniqueName: \"kubernetes.io/projected/25a5900c-a3f9-42cc-857f-a4d18d23a6bc-kube-api-access-wlkqs\") pod \"curl\" (UID: \"25a5900c-a3f9-42cc-857f-a4d18d23a6bc\") " pod="service-telemetry/curl" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.474108 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlkqs\" (UniqueName: \"kubernetes.io/projected/25a5900c-a3f9-42cc-857f-a4d18d23a6bc-kube-api-access-wlkqs\") pod \"curl\" (UID: \"25a5900c-a3f9-42cc-857f-a4d18d23a6bc\") " pod="service-telemetry/curl" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.543783 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 19 00:24:43 crc kubenswrapper[4825]: I0219 00:24:43.736681 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-4bj7d"] Feb 19 00:24:43 crc kubenswrapper[4825]: W0219 00:24:43.758880 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcaeb65a_09dc_4540_a9aa_897ed34b859f.slice/crio-78a04bd68e27724402ab2c2c1378a31ab2b4133639a6e9cf324d5b7efd2be4bf WatchSource:0}: Error finding container 78a04bd68e27724402ab2c2c1378a31ab2b4133639a6e9cf324d5b7efd2be4bf: Status 404 returned error can't find the container with id 78a04bd68e27724402ab2c2c1378a31ab2b4133639a6e9cf324d5b7efd2be4bf Feb 19 00:24:44 crc kubenswrapper[4825]: I0219 00:24:44.008882 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 19 00:24:44 crc kubenswrapper[4825]: W0219 00:24:44.011754 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25a5900c_a3f9_42cc_857f_a4d18d23a6bc.slice/crio-a3d36f14b2f08bbb922113c738e837522d56681146a72b8e75d349d702f82a96 WatchSource:0}: Error finding container a3d36f14b2f08bbb922113c738e837522d56681146a72b8e75d349d702f82a96: Status 404 returned error can't find the container with id a3d36f14b2f08bbb922113c738e837522d56681146a72b8e75d349d702f82a96 Feb 19 00:24:44 crc kubenswrapper[4825]: I0219 00:24:44.565004 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd" event={"ID":"1deef3e8-3d46-4e72-a60e-3d5166dc6a4b","Type":"ContainerStarted","Data":"939bf80e3c461e53d5bf58c27f7238e5169a774a1d7a154144f2f07d70d2c733"} Feb 19 00:24:44 crc kubenswrapper[4825]: I0219 00:24:44.566323 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" 
event={"ID":"25a5900c-a3f9-42cc-857f-a4d18d23a6bc","Type":"ContainerStarted","Data":"a3d36f14b2f08bbb922113c738e837522d56681146a72b8e75d349d702f82a96"} Feb 19 00:24:44 crc kubenswrapper[4825]: I0219 00:24:44.568528 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2" event={"ID":"af8d216a-4e33-4378-a149-fcfa67478d93","Type":"ContainerStarted","Data":"9f1b4d07efd0266ed93f15d1146ec8efa28ed02396aff2a810712f3dff12d30f"} Feb 19 00:24:44 crc kubenswrapper[4825]: I0219 00:24:44.571239 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-789zq" event={"ID":"c855158e-a4dc-467a-9d2b-923761e2cb45","Type":"ContainerStarted","Data":"0861a854b4eb4ba28e5df69285dc51152703cc752b0d8bb0b0e609fe0d602b1a"} Feb 19 00:24:44 crc kubenswrapper[4825]: I0219 00:24:44.572373 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-4bj7d" event={"ID":"bcaeb65a-09dc-4540-a9aa-897ed34b859f","Type":"ContainerStarted","Data":"78a04bd68e27724402ab2c2c1378a31ab2b4133639a6e9cf324d5b7efd2be4bf"} Feb 19 00:24:57 crc kubenswrapper[4825]: E0219 00:24:57.141293 4825 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/busyboxplus:curl" Feb 19 00:24:57 crc kubenswrapper[4825]: E0219 00:24:57.142210 4825 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:curl,Image:quay.io/infrawatch/busyboxplus:curl,Command:[],Args:[sh -c curl -v -k -H \"Content-Type: application/json\" -H \"Authorization: Bearer 
eyJhbGciOiJSUzI1NiIsImtpZCI6InF6SnFxNFFjbVk5VmJQZ2dNMmUxdHFmTlJlVWx4UDhSTlhIamV3RUx4WU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc3MTQ2NDI4MiwiaWF0IjoxNzcxNDYwNjgyLCJpc3MiOiJodHRwczovL2t1YmVybmV0ZXMuZGVmYXVsdC5zdmMiLCJqdGkiOiIxYjg2ZjUyMy1iZmZhLTQxNTQtYjRkZi03ZTlkMWE3YzAzZWIiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6InNlcnZpY2UtdGVsZW1ldHJ5Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6InByb21ldGhldXMtc3RmIiwidWlkIjoiYzQ1MWQ2MDUtYzU3ZS00MmNhLWE5MzktNDNjZjVmNTg4MWVkIn19LCJuYmYiOjE3NzE0NjA2ODIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpzZXJ2aWNlLXRlbGVtZXRyeTpwcm9tZXRoZXVzLXN0ZiJ9.KGDn5Phex2ApKPkAfXNxdWZx9syNci8N3VdPy8Z5zqeuOFxRk_CVY3mICEcTgSbaQWiv2HqDsBlVvPOaE1Tvvzuyw5hBdP3tuBGcSGl6HG9gH-XaJiGoVPxIpNJn6sTGUwDtyXgRq0wJHvidGuI9RWIWVloa2i-c3MpFqwN3WkOXxv3GIAGrj_ouVorBpvFGXLVwtsPfT2ZM3YCt2uT0BXG4RECfv7sIEK8i9UizBWRaNxSQhgZ_4rb5zvLyWSiYDUzDz-KgxMuzhJoRTDjF2kQbP73l8XjaxMrRCxNPYxuXXaZ7Tkx0uKsiibAXoc4O9paVVWiIe094rkTVaPMsuKAc49a9m5S3Jjoh5HBWImwnuwFFbcGxhFOOAQqBnpF4RRt6YSTt3XhB49ykdURZMJui0sVZcL955QlOwT4Exwb_wnn4wVsX0T4ijUSuPkbc0rM-3iyqooW1WK_PiZrzlZRX15Ohy0WLROItsmKXzOxPaROPU590rAP3UdRDveO7sRcZKettWxrwLFZECvF7S4Uk3cFiX_sHHy-EjNQdsZpbfftPb6eCP3ZVRC2T9JhMi_0gb_TL_ZRsNFGGxra7DUvlXH2xBVjrnOErDFgGW1KGIAFdqYmh_09a8gTxeNFicruTsgnZc7JelCWFl_tXcXFGkUBr5TciC9sDOd93zMc\" -d '[{\"status\":\"firing\",\"labels\":{\"alertname\":\"smoketest\",\"severity\":\"warning\"},\"startsAt\":\"2026-02-19T00:24:43+00:00\"}]' 
https://default-alertmanager-proxy:9095/api/v2/alerts],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wlkqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod curl_service-telemetry(25a5900c-a3f9-42cc-857f-a4d18d23a6bc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 00:24:57 crc kubenswrapper[4825]: E0219 00:24:57.143786 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"curl\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/curl" podUID="25a5900c-a3f9-42cc-857f-a4d18d23a6bc" Feb 19 00:24:57 crc kubenswrapper[4825]: E0219 00:24:57.721499 4825 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"curl\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/busyboxplus:curl\\\"\"" pod="service-telemetry/curl" podUID="25a5900c-a3f9-42cc-857f-a4d18d23a6bc" Feb 19 00:24:58 crc kubenswrapper[4825]: I0219 00:24:58.823672 4825 patch_prober.go:28] interesting 
pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:24:58 crc kubenswrapper[4825]: I0219 00:24:58.823782 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:24:58 crc kubenswrapper[4825]: I0219 00:24:58.823862 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:24:58 crc kubenswrapper[4825]: I0219 00:24:58.824768 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f27439f26e1215161e345d1607b94ea7543caa15aa262252676acb2360916f66"} pod="openshift-machine-config-operator/machine-config-daemon-tggq9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:24:58 crc kubenswrapper[4825]: I0219 00:24:58.824849 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" containerID="cri-o://f27439f26e1215161e345d1607b94ea7543caa15aa262252676acb2360916f66" gracePeriod=600 Feb 19 00:24:59 crc kubenswrapper[4825]: I0219 00:24:59.749588 4825 generic.go:334] "Generic (PLEG): container finished" podID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerID="f27439f26e1215161e345d1607b94ea7543caa15aa262252676acb2360916f66" exitCode=0 Feb 19 00:24:59 crc kubenswrapper[4825]: I0219 00:24:59.749677 
4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerDied","Data":"f27439f26e1215161e345d1607b94ea7543caa15aa262252676acb2360916f66"} Feb 19 00:24:59 crc kubenswrapper[4825]: I0219 00:24:59.750428 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerStarted","Data":"ccd80065e19d2d5ef4a9f01395c7a194c23ca4e0993e524d1c929084ce7402c2"} Feb 19 00:24:59 crc kubenswrapper[4825]: I0219 00:24:59.750450 4825 scope.go:117] "RemoveContainer" containerID="f4528630abd9298fa6ddba9ae1d069773d3681c2d7b7aa972cb3ffb2f6b64f7c" Feb 19 00:24:59 crc kubenswrapper[4825]: I0219 00:24:59.754540 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-4bj7d" event={"ID":"bcaeb65a-09dc-4540-a9aa-897ed34b859f","Type":"ContainerStarted","Data":"8700a9bf06ff3ce5350947ad0f344af03e16bc565a5aa7e92216256e1d4854f6"} Feb 19 00:25:05 crc kubenswrapper[4825]: I0219 00:25:05.817937 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-4bj7d" event={"ID":"bcaeb65a-09dc-4540-a9aa-897ed34b859f","Type":"ContainerStarted","Data":"6916b77cd9e9a82ec01332470b06e67e1dbace1ceb4863780de3d40fe36b0abf"} Feb 19 00:25:05 crc kubenswrapper[4825]: I0219 00:25:05.836629 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-4bj7d" podStartSLOduration=2.415775713 podStartE2EDuration="23.836602898s" podCreationTimestamp="2026-02-19 00:24:42 +0000 UTC" firstStartedPulling="2026-02-19 00:24:43.763944875 +0000 UTC m=+1029.454910922" lastFinishedPulling="2026-02-19 00:25:05.18477207 +0000 UTC m=+1050.875738107" observedRunningTime="2026-02-19 00:25:05.836345712 +0000 UTC m=+1051.527311759" 
watchObservedRunningTime="2026-02-19 00:25:05.836602898 +0000 UTC m=+1051.527568945" Feb 19 00:25:13 crc kubenswrapper[4825]: I0219 00:25:13.694427 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-c2z9k_b7420f29-5b9e-4801-9ad8-1ed62185e445/prometheus-webhook-snmp/0.log" Feb 19 00:25:13 crc kubenswrapper[4825]: I0219 00:25:13.886972 4825 generic.go:334] "Generic (PLEG): container finished" podID="25a5900c-a3f9-42cc-857f-a4d18d23a6bc" containerID="e2eaeef49aa290c0d7bbe17bb8dcafbe4e3f8bd73fcc805aa3dd57747ef9ed33" exitCode=0 Feb 19 00:25:13 crc kubenswrapper[4825]: I0219 00:25:13.887046 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"25a5900c-a3f9-42cc-857f-a4d18d23a6bc","Type":"ContainerDied","Data":"e2eaeef49aa290c0d7bbe17bb8dcafbe4e3f8bd73fcc805aa3dd57747ef9ed33"} Feb 19 00:25:15 crc kubenswrapper[4825]: I0219 00:25:15.235686 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 19 00:25:15 crc kubenswrapper[4825]: I0219 00:25:15.333308 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlkqs\" (UniqueName: \"kubernetes.io/projected/25a5900c-a3f9-42cc-857f-a4d18d23a6bc-kube-api-access-wlkqs\") pod \"25a5900c-a3f9-42cc-857f-a4d18d23a6bc\" (UID: \"25a5900c-a3f9-42cc-857f-a4d18d23a6bc\") " Feb 19 00:25:15 crc kubenswrapper[4825]: I0219 00:25:15.340907 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a5900c-a3f9-42cc-857f-a4d18d23a6bc-kube-api-access-wlkqs" (OuterVolumeSpecName: "kube-api-access-wlkqs") pod "25a5900c-a3f9-42cc-857f-a4d18d23a6bc" (UID: "25a5900c-a3f9-42cc-857f-a4d18d23a6bc"). InnerVolumeSpecName "kube-api-access-wlkqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:25:15 crc kubenswrapper[4825]: I0219 00:25:15.436012 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlkqs\" (UniqueName: \"kubernetes.io/projected/25a5900c-a3f9-42cc-857f-a4d18d23a6bc-kube-api-access-wlkqs\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:15 crc kubenswrapper[4825]: I0219 00:25:15.913061 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"25a5900c-a3f9-42cc-857f-a4d18d23a6bc","Type":"ContainerDied","Data":"a3d36f14b2f08bbb922113c738e837522d56681146a72b8e75d349d702f82a96"} Feb 19 00:25:15 crc kubenswrapper[4825]: I0219 00:25:15.913117 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3d36f14b2f08bbb922113c738e837522d56681146a72b8e75d349d702f82a96" Feb 19 00:25:15 crc kubenswrapper[4825]: I0219 00:25:15.913680 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 19 00:25:33 crc kubenswrapper[4825]: I0219 00:25:33.057126 4825 generic.go:334] "Generic (PLEG): container finished" podID="bcaeb65a-09dc-4540-a9aa-897ed34b859f" containerID="8700a9bf06ff3ce5350947ad0f344af03e16bc565a5aa7e92216256e1d4854f6" exitCode=0 Feb 19 00:25:33 crc kubenswrapper[4825]: I0219 00:25:33.057217 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-4bj7d" event={"ID":"bcaeb65a-09dc-4540-a9aa-897ed34b859f","Type":"ContainerDied","Data":"8700a9bf06ff3ce5350947ad0f344af03e16bc565a5aa7e92216256e1d4854f6"} Feb 19 00:25:33 crc kubenswrapper[4825]: I0219 00:25:33.058654 4825 scope.go:117] "RemoveContainer" containerID="8700a9bf06ff3ce5350947ad0f344af03e16bc565a5aa7e92216256e1d4854f6" Feb 19 00:25:37 crc kubenswrapper[4825]: I0219 00:25:37.117224 4825 generic.go:334] "Generic (PLEG): container finished" podID="bcaeb65a-09dc-4540-a9aa-897ed34b859f" 
containerID="6916b77cd9e9a82ec01332470b06e67e1dbace1ceb4863780de3d40fe36b0abf" exitCode=0 Feb 19 00:25:37 crc kubenswrapper[4825]: I0219 00:25:37.117268 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-4bj7d" event={"ID":"bcaeb65a-09dc-4540-a9aa-897ed34b859f","Type":"ContainerDied","Data":"6916b77cd9e9a82ec01332470b06e67e1dbace1ceb4863780de3d40fe36b0abf"} Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.445369 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.542162 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-collectd-config\") pod \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.542456 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-sensubility-config\") pod \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.542576 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-ceilometer-entrypoint-script\") pod \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.542691 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-healthcheck-log\") pod 
\"bcaeb65a-09dc-4540-a9aa-897ed34b859f\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.542856 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh68m\" (UniqueName: \"kubernetes.io/projected/bcaeb65a-09dc-4540-a9aa-897ed34b859f-kube-api-access-jh68m\") pod \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.543116 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-collectd-entrypoint-script\") pod \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.543210 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-ceilometer-publisher\") pod \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\" (UID: \"bcaeb65a-09dc-4540-a9aa-897ed34b859f\") " Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.564104 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcaeb65a-09dc-4540-a9aa-897ed34b859f-kube-api-access-jh68m" (OuterVolumeSpecName: "kube-api-access-jh68m") pod "bcaeb65a-09dc-4540-a9aa-897ed34b859f" (UID: "bcaeb65a-09dc-4540-a9aa-897ed34b859f"). InnerVolumeSpecName "kube-api-access-jh68m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.567960 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "bcaeb65a-09dc-4540-a9aa-897ed34b859f" (UID: "bcaeb65a-09dc-4540-a9aa-897ed34b859f"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.577816 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "bcaeb65a-09dc-4540-a9aa-897ed34b859f" (UID: "bcaeb65a-09dc-4540-a9aa-897ed34b859f"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.577959 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "bcaeb65a-09dc-4540-a9aa-897ed34b859f" (UID: "bcaeb65a-09dc-4540-a9aa-897ed34b859f"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.578421 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "bcaeb65a-09dc-4540-a9aa-897ed34b859f" (UID: "bcaeb65a-09dc-4540-a9aa-897ed34b859f"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.579215 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "bcaeb65a-09dc-4540-a9aa-897ed34b859f" (UID: "bcaeb65a-09dc-4540-a9aa-897ed34b859f"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.579409 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "bcaeb65a-09dc-4540-a9aa-897ed34b859f" (UID: "bcaeb65a-09dc-4540-a9aa-897ed34b859f"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.645447 4825 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.645490 4825 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.645517 4825 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.645533 4825 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: 
\"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.645553 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh68m\" (UniqueName: \"kubernetes.io/projected/bcaeb65a-09dc-4540-a9aa-897ed34b859f-kube-api-access-jh68m\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.645568 4825 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:38 crc kubenswrapper[4825]: I0219 00:25:38.645581 4825 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/bcaeb65a-09dc-4540-a9aa-897ed34b859f-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:39 crc kubenswrapper[4825]: I0219 00:25:39.154028 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-4bj7d" event={"ID":"bcaeb65a-09dc-4540-a9aa-897ed34b859f","Type":"ContainerDied","Data":"78a04bd68e27724402ab2c2c1378a31ab2b4133639a6e9cf324d5b7efd2be4bf"} Feb 19 00:25:39 crc kubenswrapper[4825]: I0219 00:25:39.154489 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78a04bd68e27724402ab2c2c1378a31ab2b4133639a6e9cf324d5b7efd2be4bf" Feb 19 00:25:39 crc kubenswrapper[4825]: I0219 00:25:39.154667 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-4bj7d" Feb 19 00:25:43 crc kubenswrapper[4825]: I0219 00:25:43.847905 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-c2z9k_b7420f29-5b9e-4801-9ad8-1ed62185e445/prometheus-webhook-snmp/0.log" Feb 19 00:25:44 crc kubenswrapper[4825]: I0219 00:25:44.967836 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-4bj7d_bcaeb65a-09dc-4540-a9aa-897ed34b859f/smoketest-collectd/0.log" Feb 19 00:25:45 crc kubenswrapper[4825]: I0219 00:25:45.210719 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-4bj7d_bcaeb65a-09dc-4540-a9aa-897ed34b859f/smoketest-ceilometer/0.log" Feb 19 00:25:45 crc kubenswrapper[4825]: I0219 00:25:45.416262 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-z4gnb_a10b7ca9-ed06-411b-a849-c9b70b89c7bc/default-interconnect/0.log" Feb 19 00:25:45 crc kubenswrapper[4825]: I0219 00:25:45.619458 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-789zq_c855158e-a4dc-467a-9d2b-923761e2cb45/bridge/2.log" Feb 19 00:25:45 crc kubenswrapper[4825]: I0219 00:25:45.891976 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-789zq_c855158e-a4dc-467a-9d2b-923761e2cb45/sg-core/0.log" Feb 19 00:25:46 crc kubenswrapper[4825]: I0219 00:25:46.194827 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7_f0225b38-d01b-4ce0-98f7-7032c8719113/bridge/2.log" Feb 19 00:25:46 crc kubenswrapper[4825]: I0219 00:25:46.434264 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5d78f9d678-q7fb7_f0225b38-d01b-4ce0-98f7-7032c8719113/sg-core/0.log" Feb 19 00:25:46 crc kubenswrapper[4825]: I0219 00:25:46.671201 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd_1deef3e8-3d46-4e72-a60e-3d5166dc6a4b/bridge/2.log" Feb 19 00:25:46 crc kubenswrapper[4825]: I0219 00:25:46.901878 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-2s7vd_1deef3e8-3d46-4e72-a60e-3d5166dc6a4b/sg-core/0.log" Feb 19 00:25:47 crc kubenswrapper[4825]: I0219 00:25:47.199926 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2_af8d216a-4e33-4378-a149-fcfa67478d93/bridge/2.log" Feb 19 00:25:47 crc kubenswrapper[4825]: I0219 00:25:47.439617 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-5c598fc64d-cdvp2_af8d216a-4e33-4378-a149-fcfa67478d93/sg-core/0.log" Feb 19 00:25:47 crc kubenswrapper[4825]: I0219 00:25:47.673339 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh_ff2e035a-3702-489c-ad4b-b2892b4e8ac9/bridge/2.log" Feb 19 00:25:47 crc kubenswrapper[4825]: I0219 00:25:47.950344 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-87rmh_ff2e035a-3702-489c-ad4b-b2892b4e8ac9/sg-core/0.log" Feb 19 00:25:51 crc kubenswrapper[4825]: I0219 00:25:51.126575 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bbbc889bc-dk5kr_8e4a26b2-2aff-4606-a21e-b8bb948103ca/operator/0.log" Feb 19 00:25:51 crc kubenswrapper[4825]: I0219 00:25:51.393436 4825 log.go:25] "Finished parsing log 
file" path="/var/log/pods/service-telemetry_prometheus-default-0_1a50e579-ea38-4bf0-bfd6-805ef1a6be97/prometheus/0.log" Feb 19 00:25:51 crc kubenswrapper[4825]: I0219 00:25:51.680131 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_701b0694-d308-4569-823a-5848cfa5fea4/elasticsearch/0.log" Feb 19 00:25:51 crc kubenswrapper[4825]: I0219 00:25:51.901466 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-c2z9k_b7420f29-5b9e-4801-9ad8-1ed62185e445/prometheus-webhook-snmp/0.log" Feb 19 00:25:52 crc kubenswrapper[4825]: I0219 00:25:52.123403 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_0c751ba6-cfab-49ff-9243-e332977dfee1/alertmanager/0.log" Feb 19 00:26:05 crc kubenswrapper[4825]: I0219 00:26:05.232427 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-55b89ddfb9-9hhqn_a9c56506-cd68-419c-9a93-a5c23dc0bc86/operator/0.log" Feb 19 00:26:08 crc kubenswrapper[4825]: I0219 00:26:08.537484 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bbbc889bc-dk5kr_8e4a26b2-2aff-4606-a21e-b8bb948103ca/operator/0.log" Feb 19 00:26:08 crc kubenswrapper[4825]: I0219 00:26:08.778556 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_10111f12-b669-4417-a698-2691e21e5c62/qdr/0.log" Feb 19 00:26:22 crc kubenswrapper[4825]: I0219 00:26:22.700868 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-nn542"] Feb 19 00:26:22 crc kubenswrapper[4825]: E0219 00:26:22.702535 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcaeb65a-09dc-4540-a9aa-897ed34b859f" containerName="smoketest-collectd" Feb 19 00:26:22 crc kubenswrapper[4825]: I0219 00:26:22.702563 4825 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bcaeb65a-09dc-4540-a9aa-897ed34b859f" containerName="smoketest-collectd" Feb 19 00:26:22 crc kubenswrapper[4825]: E0219 00:26:22.702599 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcaeb65a-09dc-4540-a9aa-897ed34b859f" containerName="smoketest-ceilometer" Feb 19 00:26:22 crc kubenswrapper[4825]: I0219 00:26:22.702610 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcaeb65a-09dc-4540-a9aa-897ed34b859f" containerName="smoketest-ceilometer" Feb 19 00:26:22 crc kubenswrapper[4825]: E0219 00:26:22.702635 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a5900c-a3f9-42cc-857f-a4d18d23a6bc" containerName="curl" Feb 19 00:26:22 crc kubenswrapper[4825]: I0219 00:26:22.702649 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a5900c-a3f9-42cc-857f-a4d18d23a6bc" containerName="curl" Feb 19 00:26:22 crc kubenswrapper[4825]: I0219 00:26:22.702866 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcaeb65a-09dc-4540-a9aa-897ed34b859f" containerName="smoketest-collectd" Feb 19 00:26:22 crc kubenswrapper[4825]: I0219 00:26:22.702888 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a5900c-a3f9-42cc-857f-a4d18d23a6bc" containerName="curl" Feb 19 00:26:22 crc kubenswrapper[4825]: I0219 00:26:22.702917 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcaeb65a-09dc-4540-a9aa-897ed34b859f" containerName="smoketest-ceilometer" Feb 19 00:26:22 crc kubenswrapper[4825]: I0219 00:26:22.703722 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-nn542" Feb 19 00:26:22 crc kubenswrapper[4825]: I0219 00:26:22.716097 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-nn542"] Feb 19 00:26:22 crc kubenswrapper[4825]: I0219 00:26:22.816550 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64cz\" (UniqueName: \"kubernetes.io/projected/2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf-kube-api-access-b64cz\") pod \"infrawatch-operators-nn542\" (UID: \"2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf\") " pod="service-telemetry/infrawatch-operators-nn542" Feb 19 00:26:22 crc kubenswrapper[4825]: I0219 00:26:22.918487 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b64cz\" (UniqueName: \"kubernetes.io/projected/2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf-kube-api-access-b64cz\") pod \"infrawatch-operators-nn542\" (UID: \"2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf\") " pod="service-telemetry/infrawatch-operators-nn542" Feb 19 00:26:22 crc kubenswrapper[4825]: I0219 00:26:22.941485 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b64cz\" (UniqueName: \"kubernetes.io/projected/2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf-kube-api-access-b64cz\") pod \"infrawatch-operators-nn542\" (UID: \"2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf\") " pod="service-telemetry/infrawatch-operators-nn542" Feb 19 00:26:23 crc kubenswrapper[4825]: I0219 00:26:23.026498 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-nn542" Feb 19 00:26:23 crc kubenswrapper[4825]: I0219 00:26:23.279285 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-nn542"] Feb 19 00:26:23 crc kubenswrapper[4825]: I0219 00:26:23.526046 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-nn542" event={"ID":"2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf","Type":"ContainerStarted","Data":"1dce8da31e69dfbb8213fa8ce5e29b10b163ffb791e6fcf7e0a8c1bb215f69a8"} Feb 19 00:26:24 crc kubenswrapper[4825]: I0219 00:26:24.535338 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-nn542" event={"ID":"2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf","Type":"ContainerStarted","Data":"bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9"} Feb 19 00:26:24 crc kubenswrapper[4825]: I0219 00:26:24.555418 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-nn542" podStartSLOduration=2.452599144 podStartE2EDuration="2.555369603s" podCreationTimestamp="2026-02-19 00:26:22 +0000 UTC" firstStartedPulling="2026-02-19 00:26:23.289417638 +0000 UTC m=+1128.980383685" lastFinishedPulling="2026-02-19 00:26:23.392188087 +0000 UTC m=+1129.083154144" observedRunningTime="2026-02-19 00:26:24.550698033 +0000 UTC m=+1130.241664090" watchObservedRunningTime="2026-02-19 00:26:24.555369603 +0000 UTC m=+1130.246335660" Feb 19 00:26:33 crc kubenswrapper[4825]: I0219 00:26:33.027178 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-nn542" Feb 19 00:26:33 crc kubenswrapper[4825]: I0219 00:26:33.030229 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-nn542" Feb 19 00:26:33 crc kubenswrapper[4825]: I0219 00:26:33.060250 4825 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="service-telemetry/infrawatch-operators-nn542" Feb 19 00:26:33 crc kubenswrapper[4825]: I0219 00:26:33.669325 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-nn542" Feb 19 00:26:33 crc kubenswrapper[4825]: I0219 00:26:33.722232 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-nn542"] Feb 19 00:26:35 crc kubenswrapper[4825]: I0219 00:26:35.641243 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-nn542" podUID="2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf" containerName="registry-server" containerID="cri-o://bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9" gracePeriod=2 Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.031338 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-nn542" Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.194730 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b64cz\" (UniqueName: \"kubernetes.io/projected/2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf-kube-api-access-b64cz\") pod \"2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf\" (UID: \"2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf\") " Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.201798 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf-kube-api-access-b64cz" (OuterVolumeSpecName: "kube-api-access-b64cz") pod "2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf" (UID: "2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf"). InnerVolumeSpecName "kube-api-access-b64cz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.296489 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b64cz\" (UniqueName: \"kubernetes.io/projected/2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf-kube-api-access-b64cz\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.648842 4825 generic.go:334] "Generic (PLEG): container finished" podID="2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf" containerID="bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9" exitCode=0 Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.648921 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-nn542" Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.648925 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-nn542" event={"ID":"2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf","Type":"ContainerDied","Data":"bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9"} Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.649056 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-nn542" event={"ID":"2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf","Type":"ContainerDied","Data":"1dce8da31e69dfbb8213fa8ce5e29b10b163ffb791e6fcf7e0a8c1bb215f69a8"} Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.649085 4825 scope.go:117] "RemoveContainer" containerID="bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9" Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.670612 4825 scope.go:117] "RemoveContainer" containerID="bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9" Feb 19 00:26:36 crc kubenswrapper[4825]: E0219 00:26:36.671385 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9\": container with ID starting with bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9 not found: ID does not exist" containerID="bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9" Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.671430 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9"} err="failed to get container status \"bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9\": rpc error: code = NotFound desc = could not find container \"bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9\": container with ID starting with bcdd8bf874a8ff0cc10343e47cceca4bbf578cdc8aca2c078e6552886e884bd9 not found: ID does not exist" Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.683203 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-nn542"] Feb 19 00:26:36 crc kubenswrapper[4825]: I0219 00:26:36.691224 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-nn542"] Feb 19 00:26:37 crc kubenswrapper[4825]: I0219 00:26:37.075845 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf" path="/var/lib/kubelet/pods/2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf/volumes" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.355574 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2wbnv/must-gather-glplg"] Feb 19 00:26:44 crc kubenswrapper[4825]: E0219 00:26:44.356768 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf" containerName="registry-server" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.356787 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf" 
containerName="registry-server" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.356968 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a7e5695-5d07-42ad-81b8-9f50a0b6e9cf" containerName="registry-server" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.357906 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2wbnv/must-gather-glplg" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.360637 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2wbnv"/"openshift-service-ca.crt" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.361575 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2wbnv"/"default-dockercfg-vrg4m" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.362724 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2wbnv"/"kube-root-ca.crt" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.395910 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2wbnv/must-gather-glplg"] Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.552044 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2hmr\" (UniqueName: \"kubernetes.io/projected/19d2c363-bc5d-42cc-9180-5d9c2f09d83c-kube-api-access-g2hmr\") pod \"must-gather-glplg\" (UID: \"19d2c363-bc5d-42cc-9180-5d9c2f09d83c\") " pod="openshift-must-gather-2wbnv/must-gather-glplg" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.552545 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19d2c363-bc5d-42cc-9180-5d9c2f09d83c-must-gather-output\") pod \"must-gather-glplg\" (UID: \"19d2c363-bc5d-42cc-9180-5d9c2f09d83c\") " pod="openshift-must-gather-2wbnv/must-gather-glplg" Feb 19 
00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.653631 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2hmr\" (UniqueName: \"kubernetes.io/projected/19d2c363-bc5d-42cc-9180-5d9c2f09d83c-kube-api-access-g2hmr\") pod \"must-gather-glplg\" (UID: \"19d2c363-bc5d-42cc-9180-5d9c2f09d83c\") " pod="openshift-must-gather-2wbnv/must-gather-glplg" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.653697 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19d2c363-bc5d-42cc-9180-5d9c2f09d83c-must-gather-output\") pod \"must-gather-glplg\" (UID: \"19d2c363-bc5d-42cc-9180-5d9c2f09d83c\") " pod="openshift-must-gather-2wbnv/must-gather-glplg" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.654218 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19d2c363-bc5d-42cc-9180-5d9c2f09d83c-must-gather-output\") pod \"must-gather-glplg\" (UID: \"19d2c363-bc5d-42cc-9180-5d9c2f09d83c\") " pod="openshift-must-gather-2wbnv/must-gather-glplg" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.673015 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2hmr\" (UniqueName: \"kubernetes.io/projected/19d2c363-bc5d-42cc-9180-5d9c2f09d83c-kube-api-access-g2hmr\") pod \"must-gather-glplg\" (UID: \"19d2c363-bc5d-42cc-9180-5d9c2f09d83c\") " pod="openshift-must-gather-2wbnv/must-gather-glplg" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.682839 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2wbnv/must-gather-glplg" Feb 19 00:26:44 crc kubenswrapper[4825]: I0219 00:26:44.885796 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2wbnv/must-gather-glplg"] Feb 19 00:26:45 crc kubenswrapper[4825]: I0219 00:26:45.745210 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2wbnv/must-gather-glplg" event={"ID":"19d2c363-bc5d-42cc-9180-5d9c2f09d83c","Type":"ContainerStarted","Data":"0163b4fae3eadabc6c5cd47b68c29f9780661b8779991d64cd594a43cfffe605"} Feb 19 00:26:52 crc kubenswrapper[4825]: I0219 00:26:52.815335 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2wbnv/must-gather-glplg" event={"ID":"19d2c363-bc5d-42cc-9180-5d9c2f09d83c","Type":"ContainerStarted","Data":"1b3e18f9e13fdd8c83050f0c30529d88da3778a72b55b36b28803afa11aecebb"} Feb 19 00:26:52 crc kubenswrapper[4825]: I0219 00:26:52.816361 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2wbnv/must-gather-glplg" event={"ID":"19d2c363-bc5d-42cc-9180-5d9c2f09d83c","Type":"ContainerStarted","Data":"5b461346dcc8f0482e38039fe803b9108a5377ce2c7c950880e601c9e28b33a3"} Feb 19 00:26:52 crc kubenswrapper[4825]: I0219 00:26:52.833598 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2wbnv/must-gather-glplg" podStartSLOduration=2.013123472 podStartE2EDuration="8.833578884s" podCreationTimestamp="2026-02-19 00:26:44 +0000 UTC" firstStartedPulling="2026-02-19 00:26:44.890580091 +0000 UTC m=+1150.581546138" lastFinishedPulling="2026-02-19 00:26:51.711035513 +0000 UTC m=+1157.402001550" observedRunningTime="2026-02-19 00:26:52.829074499 +0000 UTC m=+1158.520040556" watchObservedRunningTime="2026-02-19 00:26:52.833578884 +0000 UTC m=+1158.524544931" Feb 19 00:27:28 crc kubenswrapper[4825]: I0219 00:27:28.823991 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:27:28 crc kubenswrapper[4825]: I0219 00:27:28.824888 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:27:34 crc kubenswrapper[4825]: I0219 00:27:34.216844 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qz7nq_78b30adc-193b-4784-aa76-522479b866dc/control-plane-machine-set-operator/0.log" Feb 19 00:27:34 crc kubenswrapper[4825]: I0219 00:27:34.390839 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9p52n_999a793b-4d2e-41bb-bd09-cd8ca31cef0c/kube-rbac-proxy/0.log" Feb 19 00:27:34 crc kubenswrapper[4825]: I0219 00:27:34.413708 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9p52n_999a793b-4d2e-41bb-bd09-cd8ca31cef0c/machine-api-operator/0.log" Feb 19 00:27:47 crc kubenswrapper[4825]: I0219 00:27:47.838866 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-b7hpc_16f3958c-d237-4741-97d0-332277c0e53a/cert-manager-controller/0.log" Feb 19 00:27:47 crc kubenswrapper[4825]: I0219 00:27:47.978094 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-x2dr2_c6835fb4-5c79-47fa-8a6f-a827d96e1363/cert-manager-cainjector/0.log" Feb 19 00:27:48 crc kubenswrapper[4825]: I0219 00:27:48.050888 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-dnfkd_d67e5607-ad75-4cca-9aa6-db360e6334b1/cert-manager-webhook/0.log" Feb 19 00:27:58 crc kubenswrapper[4825]: I0219 00:27:58.824188 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:27:58 crc kubenswrapper[4825]: I0219 00:27:58.824979 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:28:02 crc kubenswrapper[4825]: I0219 00:28:02.289109 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-l52fp_d8b8a9db-14c5-4cbe-8673-3ac90c2b6749/prometheus-operator/0.log" Feb 19 00:28:02 crc kubenswrapper[4825]: I0219 00:28:02.419448 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57879d559c-488k5_5f078aef-685e-4fd4-ab75-d23f8f1cc185/prometheus-operator-admission-webhook/0.log" Feb 19 00:28:02 crc kubenswrapper[4825]: I0219 00:28:02.471498 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57879d559c-kkc6r_6608c2fd-be9c-4cb3-93a6-1dcdf8da8555/prometheus-operator-admission-webhook/0.log" Feb 19 00:28:02 crc kubenswrapper[4825]: I0219 00:28:02.610963 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-g8xlc_e4481c24-6ff2-4e86-8889-8910cb81f08b/operator/0.log" Feb 19 00:28:02 crc kubenswrapper[4825]: I0219 
00:28:02.713638 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-v5f9p_af582d80-5f92-4bf1-9d5e-44ade090c8f9/perses-operator/0.log" Feb 19 00:28:16 crc kubenswrapper[4825]: I0219 00:28:16.998816 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87_0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed/util/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.187929 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87_0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed/pull/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.227488 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87_0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed/util/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.230873 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87_0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed/pull/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.386601 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87_0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed/util/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.411218 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87_0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed/pull/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.432342 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1vmf87_0fdb85b1-37d5-4a3d-8334-a83c7a8bf6ed/extract/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.605549 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz_c60edbef-45ae-4b90-8ecc-f289c774e0c6/util/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.755563 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz_c60edbef-45ae-4b90-8ecc-f289c774e0c6/util/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.762891 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz_c60edbef-45ae-4b90-8ecc-f289c774e0c6/pull/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.763018 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz_c60edbef-45ae-4b90-8ecc-f289c774e0c6/pull/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.951851 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz_c60edbef-45ae-4b90-8ecc-f289c774e0c6/extract/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.958294 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz_c60edbef-45ae-4b90-8ecc-f289c774e0c6/util/0.log" Feb 19 00:28:17 crc kubenswrapper[4825]: I0219 00:28:17.961152 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f524jz_c60edbef-45ae-4b90-8ecc-f289c774e0c6/pull/0.log" Feb 
19 00:28:18 crc kubenswrapper[4825]: I0219 00:28:18.157601 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6_2acfe766-3f1e-4dad-a8c8-cbb88c3314ec/util/0.log" Feb 19 00:28:18 crc kubenswrapper[4825]: I0219 00:28:18.300807 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6_2acfe766-3f1e-4dad-a8c8-cbb88c3314ec/util/0.log" Feb 19 00:28:18 crc kubenswrapper[4825]: I0219 00:28:18.323471 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6_2acfe766-3f1e-4dad-a8c8-cbb88c3314ec/pull/0.log" Feb 19 00:28:18 crc kubenswrapper[4825]: I0219 00:28:18.400075 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6_2acfe766-3f1e-4dad-a8c8-cbb88c3314ec/pull/0.log" Feb 19 00:28:18 crc kubenswrapper[4825]: I0219 00:28:18.547688 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6_2acfe766-3f1e-4dad-a8c8-cbb88c3314ec/extract/0.log" Feb 19 00:28:18 crc kubenswrapper[4825]: I0219 00:28:18.549273 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6_2acfe766-3f1e-4dad-a8c8-cbb88c3314ec/util/0.log" Feb 19 00:28:18 crc kubenswrapper[4825]: I0219 00:28:18.563080 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5q49z6_2acfe766-3f1e-4dad-a8c8-cbb88c3314ec/pull/0.log" Feb 19 00:28:18 crc kubenswrapper[4825]: I0219 00:28:18.743661 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv_c7b48e68-4fea-48b5-b0c8-408af47180f5/util/0.log" Feb 19 00:28:18 crc kubenswrapper[4825]: I0219 00:28:18.924172 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv_c7b48e68-4fea-48b5-b0c8-408af47180f5/pull/0.log" Feb 19 00:28:18 crc kubenswrapper[4825]: I0219 00:28:18.930349 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv_c7b48e68-4fea-48b5-b0c8-408af47180f5/pull/0.log" Feb 19 00:28:18 crc kubenswrapper[4825]: I0219 00:28:18.953922 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv_c7b48e68-4fea-48b5-b0c8-408af47180f5/util/0.log" Feb 19 00:28:19 crc kubenswrapper[4825]: I0219 00:28:19.139550 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv_c7b48e68-4fea-48b5-b0c8-408af47180f5/pull/0.log" Feb 19 00:28:19 crc kubenswrapper[4825]: I0219 00:28:19.156547 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv_c7b48e68-4fea-48b5-b0c8-408af47180f5/util/0.log" Feb 19 00:28:19 crc kubenswrapper[4825]: I0219 00:28:19.175478 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t5tpv_c7b48e68-4fea-48b5-b0c8-408af47180f5/extract/0.log" Feb 19 00:28:19 crc kubenswrapper[4825]: I0219 00:28:19.319885 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp9cc_9ec19de0-0a04-435c-b93d-ed4231a4cce4/extract-utilities/0.log" Feb 19 00:28:19 crc 
kubenswrapper[4825]: I0219 00:28:19.521469 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp9cc_9ec19de0-0a04-435c-b93d-ed4231a4cce4/extract-content/0.log" Feb 19 00:28:19 crc kubenswrapper[4825]: I0219 00:28:19.523006 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp9cc_9ec19de0-0a04-435c-b93d-ed4231a4cce4/extract-utilities/0.log" Feb 19 00:28:19 crc kubenswrapper[4825]: I0219 00:28:19.583814 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp9cc_9ec19de0-0a04-435c-b93d-ed4231a4cce4/extract-content/0.log" Feb 19 00:28:19 crc kubenswrapper[4825]: I0219 00:28:19.728799 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp9cc_9ec19de0-0a04-435c-b93d-ed4231a4cce4/extract-content/0.log" Feb 19 00:28:19 crc kubenswrapper[4825]: I0219 00:28:19.751356 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp9cc_9ec19de0-0a04-435c-b93d-ed4231a4cce4/extract-utilities/0.log" Feb 19 00:28:19 crc kubenswrapper[4825]: I0219 00:28:19.943294 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp9cc_9ec19de0-0a04-435c-b93d-ed4231a4cce4/registry-server/0.log" Feb 19 00:28:19 crc kubenswrapper[4825]: I0219 00:28:19.965031 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5t5x_a8961fc7-6b71-4345-9d73-db95ac0b0627/extract-utilities/0.log" Feb 19 00:28:20 crc kubenswrapper[4825]: I0219 00:28:20.157558 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5t5x_a8961fc7-6b71-4345-9d73-db95ac0b0627/extract-utilities/0.log" Feb 19 00:28:20 crc kubenswrapper[4825]: I0219 00:28:20.172313 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-d5t5x_a8961fc7-6b71-4345-9d73-db95ac0b0627/extract-content/0.log" Feb 19 00:28:20 crc kubenswrapper[4825]: I0219 00:28:20.183185 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5t5x_a8961fc7-6b71-4345-9d73-db95ac0b0627/extract-content/0.log" Feb 19 00:28:20 crc kubenswrapper[4825]: I0219 00:28:20.350248 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5t5x_a8961fc7-6b71-4345-9d73-db95ac0b0627/extract-utilities/0.log" Feb 19 00:28:20 crc kubenswrapper[4825]: I0219 00:28:20.379702 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5t5x_a8961fc7-6b71-4345-9d73-db95ac0b0627/extract-content/0.log" Feb 19 00:28:20 crc kubenswrapper[4825]: I0219 00:28:20.565244 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5t5x_a8961fc7-6b71-4345-9d73-db95ac0b0627/registry-server/0.log" Feb 19 00:28:20 crc kubenswrapper[4825]: I0219 00:28:20.610334 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f5njj_d005d56b-57dc-4399-899f-3e4945f8d94d/marketplace-operator/0.log" Feb 19 00:28:20 crc kubenswrapper[4825]: I0219 00:28:20.665228 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-djjk4_bfbec1eb-c15f-48e8-bb4e-9e876d55a511/extract-utilities/0.log" Feb 19 00:28:20 crc kubenswrapper[4825]: I0219 00:28:20.853716 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-djjk4_bfbec1eb-c15f-48e8-bb4e-9e876d55a511/extract-content/0.log" Feb 19 00:28:20 crc kubenswrapper[4825]: I0219 00:28:20.863824 4825 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-djjk4_bfbec1eb-c15f-48e8-bb4e-9e876d55a511/extract-content/0.log" Feb 19 00:28:20 crc kubenswrapper[4825]: I0219 00:28:20.874076 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-djjk4_bfbec1eb-c15f-48e8-bb4e-9e876d55a511/extract-utilities/0.log" Feb 19 00:28:21 crc kubenswrapper[4825]: I0219 00:28:21.046296 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-djjk4_bfbec1eb-c15f-48e8-bb4e-9e876d55a511/extract-utilities/0.log" Feb 19 00:28:21 crc kubenswrapper[4825]: I0219 00:28:21.062370 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-djjk4_bfbec1eb-c15f-48e8-bb4e-9e876d55a511/extract-content/0.log" Feb 19 00:28:21 crc kubenswrapper[4825]: I0219 00:28:21.301709 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-djjk4_bfbec1eb-c15f-48e8-bb4e-9e876d55a511/registry-server/0.log" Feb 19 00:28:28 crc kubenswrapper[4825]: I0219 00:28:28.823216 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:28:28 crc kubenswrapper[4825]: I0219 00:28:28.823737 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:28:28 crc kubenswrapper[4825]: I0219 00:28:28.823799 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:28:28 crc kubenswrapper[4825]: I0219 00:28:28.824651 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ccd80065e19d2d5ef4a9f01395c7a194c23ca4e0993e524d1c929084ce7402c2"} pod="openshift-machine-config-operator/machine-config-daemon-tggq9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:28:28 crc kubenswrapper[4825]: I0219 00:28:28.824879 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" containerID="cri-o://ccd80065e19d2d5ef4a9f01395c7a194c23ca4e0993e524d1c929084ce7402c2" gracePeriod=600 Feb 19 00:28:29 crc kubenswrapper[4825]: I0219 00:28:29.591607 4825 generic.go:334] "Generic (PLEG): container finished" podID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerID="ccd80065e19d2d5ef4a9f01395c7a194c23ca4e0993e524d1c929084ce7402c2" exitCode=0 Feb 19 00:28:29 crc kubenswrapper[4825]: I0219 00:28:29.591685 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerDied","Data":"ccd80065e19d2d5ef4a9f01395c7a194c23ca4e0993e524d1c929084ce7402c2"} Feb 19 00:28:29 crc kubenswrapper[4825]: I0219 00:28:29.592390 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerStarted","Data":"1b2c8c4b228cfa082d88ac4af728c384853cdf37092b0fdf511a3b038234a33b"} Feb 19 00:28:29 crc kubenswrapper[4825]: I0219 00:28:29.592421 4825 scope.go:117] "RemoveContainer" 
containerID="f27439f26e1215161e345d1607b94ea7543caa15aa262252676acb2360916f66" Feb 19 00:28:33 crc kubenswrapper[4825]: I0219 00:28:33.254164 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57879d559c-kkc6r_6608c2fd-be9c-4cb3-93a6-1dcdf8da8555/prometheus-operator-admission-webhook/0.log" Feb 19 00:28:33 crc kubenswrapper[4825]: I0219 00:28:33.270532 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-l52fp_d8b8a9db-14c5-4cbe-8673-3ac90c2b6749/prometheus-operator/0.log" Feb 19 00:28:33 crc kubenswrapper[4825]: I0219 00:28:33.314709 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57879d559c-488k5_5f078aef-685e-4fd4-ab75-d23f8f1cc185/prometheus-operator-admission-webhook/0.log" Feb 19 00:28:33 crc kubenswrapper[4825]: I0219 00:28:33.397321 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-g8xlc_e4481c24-6ff2-4e86-8889-8910cb81f08b/operator/0.log" Feb 19 00:28:33 crc kubenswrapper[4825]: I0219 00:28:33.463842 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-v5f9p_af582d80-5f92-4bf1-9d5e-44ade090c8f9/perses-operator/0.log" Feb 19 00:29:28 crc kubenswrapper[4825]: I0219 00:29:28.085695 4825 generic.go:334] "Generic (PLEG): container finished" podID="19d2c363-bc5d-42cc-9180-5d9c2f09d83c" containerID="5b461346dcc8f0482e38039fe803b9108a5377ce2c7c950880e601c9e28b33a3" exitCode=0 Feb 19 00:29:28 crc kubenswrapper[4825]: I0219 00:29:28.085777 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2wbnv/must-gather-glplg" event={"ID":"19d2c363-bc5d-42cc-9180-5d9c2f09d83c","Type":"ContainerDied","Data":"5b461346dcc8f0482e38039fe803b9108a5377ce2c7c950880e601c9e28b33a3"} Feb 19 00:29:28 crc 
kubenswrapper[4825]: I0219 00:29:28.086937 4825 scope.go:117] "RemoveContainer" containerID="5b461346dcc8f0482e38039fe803b9108a5377ce2c7c950880e601c9e28b33a3" Feb 19 00:29:28 crc kubenswrapper[4825]: I0219 00:29:28.621008 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2wbnv_must-gather-glplg_19d2c363-bc5d-42cc-9180-5d9c2f09d83c/gather/0.log" Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.012644 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2wbnv/must-gather-glplg"] Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.017680 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2wbnv/must-gather-glplg" podUID="19d2c363-bc5d-42cc-9180-5d9c2f09d83c" containerName="copy" containerID="cri-o://1b3e18f9e13fdd8c83050f0c30529d88da3778a72b55b36b28803afa11aecebb" gracePeriod=2 Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.021281 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2wbnv/must-gather-glplg"] Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.142106 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2wbnv_must-gather-glplg_19d2c363-bc5d-42cc-9180-5d9c2f09d83c/copy/0.log" Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.145339 4825 generic.go:334] "Generic (PLEG): container finished" podID="19d2c363-bc5d-42cc-9180-5d9c2f09d83c" containerID="1b3e18f9e13fdd8c83050f0c30529d88da3778a72b55b36b28803afa11aecebb" exitCode=143 Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.434813 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2wbnv_must-gather-glplg_19d2c363-bc5d-42cc-9180-5d9c2f09d83c/copy/0.log" Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.435261 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2wbnv/must-gather-glplg" Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.538650 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2hmr\" (UniqueName: \"kubernetes.io/projected/19d2c363-bc5d-42cc-9180-5d9c2f09d83c-kube-api-access-g2hmr\") pod \"19d2c363-bc5d-42cc-9180-5d9c2f09d83c\" (UID: \"19d2c363-bc5d-42cc-9180-5d9c2f09d83c\") " Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.538906 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19d2c363-bc5d-42cc-9180-5d9c2f09d83c-must-gather-output\") pod \"19d2c363-bc5d-42cc-9180-5d9c2f09d83c\" (UID: \"19d2c363-bc5d-42cc-9180-5d9c2f09d83c\") " Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.547074 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d2c363-bc5d-42cc-9180-5d9c2f09d83c-kube-api-access-g2hmr" (OuterVolumeSpecName: "kube-api-access-g2hmr") pod "19d2c363-bc5d-42cc-9180-5d9c2f09d83c" (UID: "19d2c363-bc5d-42cc-9180-5d9c2f09d83c"). InnerVolumeSpecName "kube-api-access-g2hmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.608931 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19d2c363-bc5d-42cc-9180-5d9c2f09d83c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "19d2c363-bc5d-42cc-9180-5d9c2f09d83c" (UID: "19d2c363-bc5d-42cc-9180-5d9c2f09d83c"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.640730 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2hmr\" (UniqueName: \"kubernetes.io/projected/19d2c363-bc5d-42cc-9180-5d9c2f09d83c-kube-api-access-g2hmr\") on node \"crc\" DevicePath \"\"" Feb 19 00:29:35 crc kubenswrapper[4825]: I0219 00:29:35.641127 4825 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/19d2c363-bc5d-42cc-9180-5d9c2f09d83c-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 00:29:36 crc kubenswrapper[4825]: I0219 00:29:36.153375 4825 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2wbnv_must-gather-glplg_19d2c363-bc5d-42cc-9180-5d9c2f09d83c/copy/0.log" Feb 19 00:29:36 crc kubenswrapper[4825]: I0219 00:29:36.154116 4825 scope.go:117] "RemoveContainer" containerID="1b3e18f9e13fdd8c83050f0c30529d88da3778a72b55b36b28803afa11aecebb" Feb 19 00:29:36 crc kubenswrapper[4825]: I0219 00:29:36.154410 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2wbnv/must-gather-glplg" Feb 19 00:29:36 crc kubenswrapper[4825]: I0219 00:29:36.180004 4825 scope.go:117] "RemoveContainer" containerID="5b461346dcc8f0482e38039fe803b9108a5377ce2c7c950880e601c9e28b33a3" Feb 19 00:29:37 crc kubenswrapper[4825]: I0219 00:29:37.077070 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d2c363-bc5d-42cc-9180-5d9c2f09d83c" path="/var/lib/kubelet/pods/19d2c363-bc5d-42cc-9180-5d9c2f09d83c/volumes" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.147408 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw"] Feb 19 00:30:00 crc kubenswrapper[4825]: E0219 00:30:00.148589 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d2c363-bc5d-42cc-9180-5d9c2f09d83c" containerName="gather" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.148608 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d2c363-bc5d-42cc-9180-5d9c2f09d83c" containerName="gather" Feb 19 00:30:00 crc kubenswrapper[4825]: E0219 00:30:00.148646 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19d2c363-bc5d-42cc-9180-5d9c2f09d83c" containerName="copy" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.148655 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="19d2c363-bc5d-42cc-9180-5d9c2f09d83c" containerName="copy" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.148838 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d2c363-bc5d-42cc-9180-5d9c2f09d83c" containerName="copy" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.148856 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d2c363-bc5d-42cc-9180-5d9c2f09d83c" containerName="gather" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.149546 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.151617 4825 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.160640 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw"] Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.189283 4825 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.285695 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf7xn\" (UniqueName: \"kubernetes.io/projected/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-kube-api-access-jf7xn\") pod \"collect-profiles-29524350-lqgvw\" (UID: \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.285778 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-secret-volume\") pod \"collect-profiles-29524350-lqgvw\" (UID: \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.285831 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-config-volume\") pod \"collect-profiles-29524350-lqgvw\" (UID: \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.387133 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-config-volume\") pod \"collect-profiles-29524350-lqgvw\" (UID: \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.387643 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf7xn\" (UniqueName: \"kubernetes.io/projected/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-kube-api-access-jf7xn\") pod \"collect-profiles-29524350-lqgvw\" (UID: \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.387713 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-secret-volume\") pod \"collect-profiles-29524350-lqgvw\" (UID: \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.390039 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-config-volume\") pod \"collect-profiles-29524350-lqgvw\" (UID: \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.394744 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-secret-volume\") pod \"collect-profiles-29524350-lqgvw\" (UID: \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.410317 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf7xn\" (UniqueName: \"kubernetes.io/projected/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-kube-api-access-jf7xn\") pod \"collect-profiles-29524350-lqgvw\" (UID: \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.505835 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:00 crc kubenswrapper[4825]: I0219 00:30:00.938486 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw"] Feb 19 00:30:01 crc kubenswrapper[4825]: I0219 00:30:01.377454 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" event={"ID":"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832","Type":"ContainerStarted","Data":"f35e20a7535e00743b3b8f64fab2f41737d7869aea4eaf2687a4bda9c87c4de8"} Feb 19 00:30:01 crc kubenswrapper[4825]: I0219 00:30:01.377543 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" event={"ID":"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832","Type":"ContainerStarted","Data":"ea29e18a01d58856e93dfc76cde4f0be9eb65ea10476ed6df58f3934ac3e7d20"} Feb 19 00:30:01 crc kubenswrapper[4825]: I0219 00:30:01.394131 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" 
podStartSLOduration=1.394107483 podStartE2EDuration="1.394107483s" podCreationTimestamp="2026-02-19 00:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:30:01.39058261 +0000 UTC m=+1347.081548667" watchObservedRunningTime="2026-02-19 00:30:01.394107483 +0000 UTC m=+1347.085073530" Feb 19 00:30:02 crc kubenswrapper[4825]: I0219 00:30:02.385631 4825 generic.go:334] "Generic (PLEG): container finished" podID="afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832" containerID="f35e20a7535e00743b3b8f64fab2f41737d7869aea4eaf2687a4bda9c87c4de8" exitCode=0 Feb 19 00:30:02 crc kubenswrapper[4825]: I0219 00:30:02.385693 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" event={"ID":"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832","Type":"ContainerDied","Data":"f35e20a7535e00743b3b8f64fab2f41737d7869aea4eaf2687a4bda9c87c4de8"} Feb 19 00:30:03 crc kubenswrapper[4825]: I0219 00:30:03.647042 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:03 crc kubenswrapper[4825]: I0219 00:30:03.669646 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf7xn\" (UniqueName: \"kubernetes.io/projected/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-kube-api-access-jf7xn\") pod \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\" (UID: \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\") " Feb 19 00:30:03 crc kubenswrapper[4825]: I0219 00:30:03.669824 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-config-volume\") pod \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\" (UID: \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\") " Feb 19 00:30:03 crc kubenswrapper[4825]: I0219 00:30:03.669906 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-secret-volume\") pod \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\" (UID: \"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832\") " Feb 19 00:30:03 crc kubenswrapper[4825]: I0219 00:30:03.671915 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-config-volume" (OuterVolumeSpecName: "config-volume") pod "afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832" (UID: "afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:30:03 crc kubenswrapper[4825]: I0219 00:30:03.678888 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-kube-api-access-jf7xn" (OuterVolumeSpecName: "kube-api-access-jf7xn") pod "afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832" (UID: "afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832"). 
InnerVolumeSpecName "kube-api-access-jf7xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:30:03 crc kubenswrapper[4825]: I0219 00:30:03.679144 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832" (UID: "afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:30:03 crc kubenswrapper[4825]: I0219 00:30:03.771978 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf7xn\" (UniqueName: \"kubernetes.io/projected/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-kube-api-access-jf7xn\") on node \"crc\" DevicePath \"\"" Feb 19 00:30:03 crc kubenswrapper[4825]: I0219 00:30:03.772431 4825 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:30:03 crc kubenswrapper[4825]: I0219 00:30:03.772441 4825 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:30:04 crc kubenswrapper[4825]: I0219 00:30:04.413840 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" event={"ID":"afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832","Type":"ContainerDied","Data":"ea29e18a01d58856e93dfc76cde4f0be9eb65ea10476ed6df58f3934ac3e7d20"} Feb 19 00:30:04 crc kubenswrapper[4825]: I0219 00:30:04.413894 4825 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea29e18a01d58856e93dfc76cde4f0be9eb65ea10476ed6df58f3934ac3e7d20" Feb 19 00:30:04 crc kubenswrapper[4825]: I0219 00:30:04.413930 4825 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-lqgvw" Feb 19 00:30:58 crc kubenswrapper[4825]: I0219 00:30:58.823526 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:30:58 crc kubenswrapper[4825]: I0219 00:30:58.824413 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:31:28 crc kubenswrapper[4825]: I0219 00:31:28.823107 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:31:28 crc kubenswrapper[4825]: I0219 00:31:28.823806 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:31:52 crc kubenswrapper[4825]: I0219 00:31:52.654754 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-dcps5"] Feb 19 00:31:52 crc kubenswrapper[4825]: E0219 00:31:52.655859 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832" containerName="collect-profiles" 
Feb 19 00:31:52 crc kubenswrapper[4825]: I0219 00:31:52.655874 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832" containerName="collect-profiles" Feb 19 00:31:52 crc kubenswrapper[4825]: I0219 00:31:52.656039 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc96c7a-f775-4ca1-8a5b-a4f3e0bf1832" containerName="collect-profiles" Feb 19 00:31:52 crc kubenswrapper[4825]: I0219 00:31:52.656623 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-dcps5" Feb 19 00:31:52 crc kubenswrapper[4825]: I0219 00:31:52.673869 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-dcps5"] Feb 19 00:31:52 crc kubenswrapper[4825]: I0219 00:31:52.711427 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjpkp\" (UniqueName: \"kubernetes.io/projected/69d9e417-a568-4621-9c91-122552df9a77-kube-api-access-fjpkp\") pod \"infrawatch-operators-dcps5\" (UID: \"69d9e417-a568-4621-9c91-122552df9a77\") " pod="service-telemetry/infrawatch-operators-dcps5" Feb 19 00:31:52 crc kubenswrapper[4825]: I0219 00:31:52.813878 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjpkp\" (UniqueName: \"kubernetes.io/projected/69d9e417-a568-4621-9c91-122552df9a77-kube-api-access-fjpkp\") pod \"infrawatch-operators-dcps5\" (UID: \"69d9e417-a568-4621-9c91-122552df9a77\") " pod="service-telemetry/infrawatch-operators-dcps5" Feb 19 00:31:52 crc kubenswrapper[4825]: I0219 00:31:52.836450 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjpkp\" (UniqueName: \"kubernetes.io/projected/69d9e417-a568-4621-9c91-122552df9a77-kube-api-access-fjpkp\") pod \"infrawatch-operators-dcps5\" (UID: \"69d9e417-a568-4621-9c91-122552df9a77\") " pod="service-telemetry/infrawatch-operators-dcps5" Feb 19 
00:31:52 crc kubenswrapper[4825]: I0219 00:31:52.976405 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-dcps5" Feb 19 00:31:53 crc kubenswrapper[4825]: I0219 00:31:53.388015 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-dcps5"] Feb 19 00:31:53 crc kubenswrapper[4825]: I0219 00:31:53.397924 4825 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:31:54 crc kubenswrapper[4825]: I0219 00:31:54.279670 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-dcps5" event={"ID":"69d9e417-a568-4621-9c91-122552df9a77","Type":"ContainerStarted","Data":"6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66"} Feb 19 00:31:54 crc kubenswrapper[4825]: I0219 00:31:54.279716 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-dcps5" event={"ID":"69d9e417-a568-4621-9c91-122552df9a77","Type":"ContainerStarted","Data":"e9cb1a46368314833b76b4f522d27f3e6fd7afc8ca7c08db6a144a32da981a83"} Feb 19 00:31:54 crc kubenswrapper[4825]: I0219 00:31:54.295943 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-dcps5" podStartSLOduration=2.192902782 podStartE2EDuration="2.295919113s" podCreationTimestamp="2026-02-19 00:31:52 +0000 UTC" firstStartedPulling="2026-02-19 00:31:53.39760373 +0000 UTC m=+1459.088569777" lastFinishedPulling="2026-02-19 00:31:53.500620061 +0000 UTC m=+1459.191586108" observedRunningTime="2026-02-19 00:31:54.294275848 +0000 UTC m=+1459.985241895" watchObservedRunningTime="2026-02-19 00:31:54.295919113 +0000 UTC m=+1459.986885160" Feb 19 00:31:58 crc kubenswrapper[4825]: I0219 00:31:58.823246 4825 patch_prober.go:28] interesting pod/machine-config-daemon-tggq9 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:31:58 crc kubenswrapper[4825]: I0219 00:31:58.824030 4825 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:31:58 crc kubenswrapper[4825]: I0219 00:31:58.824085 4825 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" Feb 19 00:31:58 crc kubenswrapper[4825]: I0219 00:31:58.824751 4825 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b2c8c4b228cfa082d88ac4af728c384853cdf37092b0fdf511a3b038234a33b"} pod="openshift-machine-config-operator/machine-config-daemon-tggq9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:31:58 crc kubenswrapper[4825]: I0219 00:31:58.824830 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" podUID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerName="machine-config-daemon" containerID="cri-o://1b2c8c4b228cfa082d88ac4af728c384853cdf37092b0fdf511a3b038234a33b" gracePeriod=600 Feb 19 00:31:59 crc kubenswrapper[4825]: I0219 00:31:59.315744 4825 generic.go:334] "Generic (PLEG): container finished" podID="bd6d1b9a-0fd9-43be-9ed5-7430e830b94f" containerID="1b2c8c4b228cfa082d88ac4af728c384853cdf37092b0fdf511a3b038234a33b" exitCode=0 Feb 19 00:31:59 crc kubenswrapper[4825]: I0219 00:31:59.315821 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerDied","Data":"1b2c8c4b228cfa082d88ac4af728c384853cdf37092b0fdf511a3b038234a33b"} Feb 19 00:31:59 crc kubenswrapper[4825]: I0219 00:31:59.316264 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-tggq9" event={"ID":"bd6d1b9a-0fd9-43be-9ed5-7430e830b94f","Type":"ContainerStarted","Data":"f3388a4fbababad83d8dca56b5f16a9f98d626301f61601846abaa79088a6d77"} Feb 19 00:31:59 crc kubenswrapper[4825]: I0219 00:31:59.316287 4825 scope.go:117] "RemoveContainer" containerID="ccd80065e19d2d5ef4a9f01395c7a194c23ca4e0993e524d1c929084ce7402c2" Feb 19 00:32:02 crc kubenswrapper[4825]: I0219 00:32:02.977375 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-dcps5" Feb 19 00:32:02 crc kubenswrapper[4825]: I0219 00:32:02.979717 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-dcps5" Feb 19 00:32:03 crc kubenswrapper[4825]: I0219 00:32:03.013132 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-dcps5" Feb 19 00:32:03 crc kubenswrapper[4825]: I0219 00:32:03.375967 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-dcps5" Feb 19 00:32:03 crc kubenswrapper[4825]: I0219 00:32:03.419232 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-dcps5"] Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.366033 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-dcps5" podUID="69d9e417-a568-4621-9c91-122552df9a77" containerName="registry-server" 
containerID="cri-o://6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66" gracePeriod=2 Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.691068 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cxx4b"] Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.694351 4825 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.702676 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cxx4b"] Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.812880 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-utilities\") pod \"redhat-operators-cxx4b\" (UID: \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\") " pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.812987 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsdb4\" (UniqueName: \"kubernetes.io/projected/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-kube-api-access-vsdb4\") pod \"redhat-operators-cxx4b\" (UID: \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\") " pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.813172 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-catalog-content\") pod \"redhat-operators-cxx4b\" (UID: \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\") " pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.847988 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-dcps5" Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.914637 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjpkp\" (UniqueName: \"kubernetes.io/projected/69d9e417-a568-4621-9c91-122552df9a77-kube-api-access-fjpkp\") pod \"69d9e417-a568-4621-9c91-122552df9a77\" (UID: \"69d9e417-a568-4621-9c91-122552df9a77\") " Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.915300 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-catalog-content\") pod \"redhat-operators-cxx4b\" (UID: \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\") " pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.915389 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-utilities\") pod \"redhat-operators-cxx4b\" (UID: \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\") " pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.915937 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-catalog-content\") pod \"redhat-operators-cxx4b\" (UID: \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\") " pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.916021 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsdb4\" (UniqueName: \"kubernetes.io/projected/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-kube-api-access-vsdb4\") pod \"redhat-operators-cxx4b\" (UID: \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\") " pod="openshift-marketplace/redhat-operators-cxx4b" 
Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.916350 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-utilities\") pod \"redhat-operators-cxx4b\" (UID: \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\") " pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.926214 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d9e417-a568-4621-9c91-122552df9a77-kube-api-access-fjpkp" (OuterVolumeSpecName: "kube-api-access-fjpkp") pod "69d9e417-a568-4621-9c91-122552df9a77" (UID: "69d9e417-a568-4621-9c91-122552df9a77"). InnerVolumeSpecName "kube-api-access-fjpkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:32:05 crc kubenswrapper[4825]: I0219 00:32:05.939439 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsdb4\" (UniqueName: \"kubernetes.io/projected/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-kube-api-access-vsdb4\") pod \"redhat-operators-cxx4b\" (UID: \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\") " pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.018336 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjpkp\" (UniqueName: \"kubernetes.io/projected/69d9e417-a568-4621-9c91-122552df9a77-kube-api-access-fjpkp\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.031735 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.248797 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cxx4b"] Feb 19 00:32:06 crc kubenswrapper[4825]: W0219 00:32:06.255492 4825 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9f898f5_69ab_4e0d_8ec0_6781afb14d6e.slice/crio-1e5a75280486cb49da528e9077f7bd4939d9c3582161b2e3aa998d969783073a WatchSource:0}: Error finding container 1e5a75280486cb49da528e9077f7bd4939d9c3582161b2e3aa998d969783073a: Status 404 returned error can't find the container with id 1e5a75280486cb49da528e9077f7bd4939d9c3582161b2e3aa998d969783073a Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.373011 4825 generic.go:334] "Generic (PLEG): container finished" podID="69d9e417-a568-4621-9c91-122552df9a77" containerID="6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66" exitCode=0 Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.373164 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-dcps5" Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.373151 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-dcps5" event={"ID":"69d9e417-a568-4621-9c91-122552df9a77","Type":"ContainerDied","Data":"6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66"} Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.373676 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-dcps5" event={"ID":"69d9e417-a568-4621-9c91-122552df9a77","Type":"ContainerDied","Data":"e9cb1a46368314833b76b4f522d27f3e6fd7afc8ca7c08db6a144a32da981a83"} Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.373708 4825 scope.go:117] "RemoveContainer" containerID="6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66" Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.381815 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxx4b" event={"ID":"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e","Type":"ContainerStarted","Data":"1e5a75280486cb49da528e9077f7bd4939d9c3582161b2e3aa998d969783073a"} Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.403980 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-dcps5"] Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.406087 4825 scope.go:117] "RemoveContainer" containerID="6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66" Feb 19 00:32:06 crc kubenswrapper[4825]: E0219 00:32:06.406698 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66\": container with ID starting with 6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66 not found: ID does not exist" 
containerID="6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66" Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.406763 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66"} err="failed to get container status \"6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66\": rpc error: code = NotFound desc = could not find container \"6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66\": container with ID starting with 6b13ced79f3ab6b9deb6d67f5c1a869adf8710ad1914161848e13defbae54e66 not found: ID does not exist" Feb 19 00:32:06 crc kubenswrapper[4825]: I0219 00:32:06.409161 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-dcps5"] Feb 19 00:32:07 crc kubenswrapper[4825]: I0219 00:32:07.074772 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d9e417-a568-4621-9c91-122552df9a77" path="/var/lib/kubelet/pods/69d9e417-a568-4621-9c91-122552df9a77/volumes" Feb 19 00:32:07 crc kubenswrapper[4825]: I0219 00:32:07.390135 4825 generic.go:334] "Generic (PLEG): container finished" podID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" containerID="168b8a15e18500982b1e582fddefc5596333badbfe05dbe9c88caf634ced5904" exitCode=0 Feb 19 00:32:07 crc kubenswrapper[4825]: I0219 00:32:07.390221 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxx4b" event={"ID":"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e","Type":"ContainerDied","Data":"168b8a15e18500982b1e582fddefc5596333badbfe05dbe9c88caf634ced5904"} Feb 19 00:32:09 crc kubenswrapper[4825]: I0219 00:32:09.413324 4825 generic.go:334] "Generic (PLEG): container finished" podID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" containerID="5ecfb14f8564fcbadf5bc555f734df3cbed370a1b9a3c6359740bbff71537e62" exitCode=0 Feb 19 00:32:09 crc kubenswrapper[4825]: I0219 00:32:09.413548 4825 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxx4b" event={"ID":"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e","Type":"ContainerDied","Data":"5ecfb14f8564fcbadf5bc555f734df3cbed370a1b9a3c6359740bbff71537e62"} Feb 19 00:32:10 crc kubenswrapper[4825]: I0219 00:32:10.424235 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxx4b" event={"ID":"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e","Type":"ContainerStarted","Data":"414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723"} Feb 19 00:32:10 crc kubenswrapper[4825]: I0219 00:32:10.454209 4825 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cxx4b" podStartSLOduration=3.01962839 podStartE2EDuration="5.454185123s" podCreationTimestamp="2026-02-19 00:32:05 +0000 UTC" firstStartedPulling="2026-02-19 00:32:07.392797239 +0000 UTC m=+1473.083763276" lastFinishedPulling="2026-02-19 00:32:09.827353972 +0000 UTC m=+1475.518320009" observedRunningTime="2026-02-19 00:32:10.447658904 +0000 UTC m=+1476.138624991" watchObservedRunningTime="2026-02-19 00:32:10.454185123 +0000 UTC m=+1476.145151170" Feb 19 00:32:16 crc kubenswrapper[4825]: I0219 00:32:16.032209 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:16 crc kubenswrapper[4825]: I0219 00:32:16.032985 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:17 crc kubenswrapper[4825]: I0219 00:32:17.071592 4825 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cxx4b" podUID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" containerName="registry-server" probeResult="failure" output=< Feb 19 00:32:17 crc kubenswrapper[4825]: timeout: failed to connect service ":50051" within 1s Feb 19 00:32:17 crc kubenswrapper[4825]: > Feb 19 
00:32:26 crc kubenswrapper[4825]: I0219 00:32:26.079645 4825 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:26 crc kubenswrapper[4825]: I0219 00:32:26.132233 4825 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:26 crc kubenswrapper[4825]: I0219 00:32:26.313211 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cxx4b"] Feb 19 00:32:27 crc kubenswrapper[4825]: I0219 00:32:27.577222 4825 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cxx4b" podUID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" containerName="registry-server" containerID="cri-o://414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723" gracePeriod=2 Feb 19 00:32:27 crc kubenswrapper[4825]: I0219 00:32:27.938744 4825 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.123029 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsdb4\" (UniqueName: \"kubernetes.io/projected/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-kube-api-access-vsdb4\") pod \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\" (UID: \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\") " Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.123295 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-catalog-content\") pod \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\" (UID: \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\") " Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.123474 4825 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-utilities\") pod \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\" (UID: \"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e\") " Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.124630 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-utilities" (OuterVolumeSpecName: "utilities") pod "d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" (UID: "d9f898f5-69ab-4e0d-8ec0-6781afb14d6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.132410 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-kube-api-access-vsdb4" (OuterVolumeSpecName: "kube-api-access-vsdb4") pod "d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" (UID: "d9f898f5-69ab-4e0d-8ec0-6781afb14d6e"). InnerVolumeSpecName "kube-api-access-vsdb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.225726 4825 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.226195 4825 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsdb4\" (UniqueName: \"kubernetes.io/projected/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-kube-api-access-vsdb4\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.264821 4825 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" (UID: "d9f898f5-69ab-4e0d-8ec0-6781afb14d6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.327848 4825 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.588711 4825 generic.go:334] "Generic (PLEG): container finished" podID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" containerID="414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723" exitCode=0 Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.588757 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxx4b" event={"ID":"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e","Type":"ContainerDied","Data":"414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723"} Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.588790 4825 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-cxx4b" event={"ID":"d9f898f5-69ab-4e0d-8ec0-6781afb14d6e","Type":"ContainerDied","Data":"1e5a75280486cb49da528e9077f7bd4939d9c3582161b2e3aa998d969783073a"} Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.588806 4825 scope.go:117] "RemoveContainer" containerID="414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.588802 4825 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cxx4b" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.623457 4825 scope.go:117] "RemoveContainer" containerID="5ecfb14f8564fcbadf5bc555f734df3cbed370a1b9a3c6359740bbff71537e62" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.634568 4825 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cxx4b"] Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.658164 4825 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cxx4b"] Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.728702 4825 scope.go:117] "RemoveContainer" containerID="168b8a15e18500982b1e582fddefc5596333badbfe05dbe9c88caf634ced5904" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.755742 4825 scope.go:117] "RemoveContainer" containerID="414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723" Feb 19 00:32:28 crc kubenswrapper[4825]: E0219 00:32:28.759274 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723\": container with ID starting with 414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723 not found: ID does not exist" containerID="414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.759327 4825 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723"} err="failed to get container status \"414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723\": rpc error: code = NotFound desc = could not find container \"414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723\": container with ID starting with 414d15c96a1964db3cfcfc36343a263eb89292627f64254d73956adf60c5f723 not found: ID does not exist" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.759358 4825 scope.go:117] "RemoveContainer" containerID="5ecfb14f8564fcbadf5bc555f734df3cbed370a1b9a3c6359740bbff71537e62" Feb 19 00:32:28 crc kubenswrapper[4825]: E0219 00:32:28.763036 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ecfb14f8564fcbadf5bc555f734df3cbed370a1b9a3c6359740bbff71537e62\": container with ID starting with 5ecfb14f8564fcbadf5bc555f734df3cbed370a1b9a3c6359740bbff71537e62 not found: ID does not exist" containerID="5ecfb14f8564fcbadf5bc555f734df3cbed370a1b9a3c6359740bbff71537e62" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.763069 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ecfb14f8564fcbadf5bc555f734df3cbed370a1b9a3c6359740bbff71537e62"} err="failed to get container status \"5ecfb14f8564fcbadf5bc555f734df3cbed370a1b9a3c6359740bbff71537e62\": rpc error: code = NotFound desc = could not find container \"5ecfb14f8564fcbadf5bc555f734df3cbed370a1b9a3c6359740bbff71537e62\": container with ID starting with 5ecfb14f8564fcbadf5bc555f734df3cbed370a1b9a3c6359740bbff71537e62 not found: ID does not exist" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.763095 4825 scope.go:117] "RemoveContainer" containerID="168b8a15e18500982b1e582fddefc5596333badbfe05dbe9c88caf634ced5904" Feb 19 00:32:28 crc kubenswrapper[4825]: E0219 
00:32:28.766817 4825 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"168b8a15e18500982b1e582fddefc5596333badbfe05dbe9c88caf634ced5904\": container with ID starting with 168b8a15e18500982b1e582fddefc5596333badbfe05dbe9c88caf634ced5904 not found: ID does not exist" containerID="168b8a15e18500982b1e582fddefc5596333badbfe05dbe9c88caf634ced5904" Feb 19 00:32:28 crc kubenswrapper[4825]: I0219 00:32:28.766851 4825 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168b8a15e18500982b1e582fddefc5596333badbfe05dbe9c88caf634ced5904"} err="failed to get container status \"168b8a15e18500982b1e582fddefc5596333badbfe05dbe9c88caf634ced5904\": rpc error: code = NotFound desc = could not find container \"168b8a15e18500982b1e582fddefc5596333badbfe05dbe9c88caf634ced5904\": container with ID starting with 168b8a15e18500982b1e582fddefc5596333badbfe05dbe9c88caf634ced5904 not found: ID does not exist" Feb 19 00:32:29 crc kubenswrapper[4825]: I0219 00:32:29.074500 4825 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" path="/var/lib/kubelet/pods/d9f898f5-69ab-4e0d-8ec0-6781afb14d6e/volumes" Feb 19 00:33:55 crc kubenswrapper[4825]: I0219 00:33:55.910784 4825 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7qppt"] Feb 19 00:33:55 crc kubenswrapper[4825]: E0219 00:33:55.911690 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d9e417-a568-4621-9c91-122552df9a77" containerName="registry-server" Feb 19 00:33:55 crc kubenswrapper[4825]: I0219 00:33:55.911706 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d9e417-a568-4621-9c91-122552df9a77" containerName="registry-server" Feb 19 00:33:55 crc kubenswrapper[4825]: E0219 00:33:55.911726 4825 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" containerName="extract-utilities" Feb 19 00:33:55 crc kubenswrapper[4825]: I0219 00:33:55.911735 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" containerName="extract-utilities" Feb 19 00:33:55 crc kubenswrapper[4825]: E0219 00:33:55.911747 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" containerName="registry-server" Feb 19 00:33:55 crc kubenswrapper[4825]: I0219 00:33:55.911756 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" containerName="registry-server" Feb 19 00:33:55 crc kubenswrapper[4825]: E0219 00:33:55.911785 4825 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" containerName="extract-content" Feb 19 00:33:55 crc kubenswrapper[4825]: I0219 00:33:55.911793 4825 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" containerName="extract-content" Feb 19 00:33:55 crc kubenswrapper[4825]: I0219 00:33:55.911990 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f898f5-69ab-4e0d-8ec0-6781afb14d6e" containerName="registry-server" Feb 19 00:33:55 crc kubenswrapper[4825]: I0219 00:33:55.912024 4825 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d9e417-a568-4621-9c91-122552df9a77" containerName="registry-server" Feb 19 00:33:55 crc kubenswrapper[4825]: I0219 00:33:55.913323 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7qppt" Feb 19 00:33:55 crc kubenswrapper[4825]: I0219 00:33:55.919300 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qppt"] Feb 19 00:33:56 crc kubenswrapper[4825]: I0219 00:33:56.109581 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844e72c9-d703-4aad-9d54-68eb45322983-catalog-content\") pod \"certified-operators-7qppt\" (UID: \"844e72c9-d703-4aad-9d54-68eb45322983\") " pod="openshift-marketplace/certified-operators-7qppt" Feb 19 00:33:56 crc kubenswrapper[4825]: I0219 00:33:56.109653 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw7gr\" (UniqueName: \"kubernetes.io/projected/844e72c9-d703-4aad-9d54-68eb45322983-kube-api-access-cw7gr\") pod \"certified-operators-7qppt\" (UID: \"844e72c9-d703-4aad-9d54-68eb45322983\") " pod="openshift-marketplace/certified-operators-7qppt" Feb 19 00:33:56 crc kubenswrapper[4825]: I0219 00:33:56.109684 4825 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844e72c9-d703-4aad-9d54-68eb45322983-utilities\") pod \"certified-operators-7qppt\" (UID: \"844e72c9-d703-4aad-9d54-68eb45322983\") " pod="openshift-marketplace/certified-operators-7qppt" Feb 19 00:33:56 crc kubenswrapper[4825]: I0219 00:33:56.210612 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844e72c9-d703-4aad-9d54-68eb45322983-catalog-content\") pod \"certified-operators-7qppt\" (UID: \"844e72c9-d703-4aad-9d54-68eb45322983\") " pod="openshift-marketplace/certified-operators-7qppt" Feb 19 00:33:56 crc kubenswrapper[4825]: I0219 00:33:56.210680 4825 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cw7gr\" (UniqueName: \"kubernetes.io/projected/844e72c9-d703-4aad-9d54-68eb45322983-kube-api-access-cw7gr\") pod \"certified-operators-7qppt\" (UID: \"844e72c9-d703-4aad-9d54-68eb45322983\") " pod="openshift-marketplace/certified-operators-7qppt" Feb 19 00:33:56 crc kubenswrapper[4825]: I0219 00:33:56.210709 4825 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844e72c9-d703-4aad-9d54-68eb45322983-utilities\") pod \"certified-operators-7qppt\" (UID: \"844e72c9-d703-4aad-9d54-68eb45322983\") " pod="openshift-marketplace/certified-operators-7qppt" Feb 19 00:33:56 crc kubenswrapper[4825]: I0219 00:33:56.212235 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/844e72c9-d703-4aad-9d54-68eb45322983-utilities\") pod \"certified-operators-7qppt\" (UID: \"844e72c9-d703-4aad-9d54-68eb45322983\") " pod="openshift-marketplace/certified-operators-7qppt" Feb 19 00:33:56 crc kubenswrapper[4825]: I0219 00:33:56.212351 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/844e72c9-d703-4aad-9d54-68eb45322983-catalog-content\") pod \"certified-operators-7qppt\" (UID: \"844e72c9-d703-4aad-9d54-68eb45322983\") " pod="openshift-marketplace/certified-operators-7qppt" Feb 19 00:33:56 crc kubenswrapper[4825]: I0219 00:33:56.234496 4825 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw7gr\" (UniqueName: \"kubernetes.io/projected/844e72c9-d703-4aad-9d54-68eb45322983-kube-api-access-cw7gr\") pod \"certified-operators-7qppt\" (UID: \"844e72c9-d703-4aad-9d54-68eb45322983\") " pod="openshift-marketplace/certified-operators-7qppt" Feb 19 00:33:56 crc kubenswrapper[4825]: I0219 00:33:56.253347 4825 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7qppt" Feb 19 00:33:56 crc kubenswrapper[4825]: I0219 00:33:56.561944 4825 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qppt"] Feb 19 00:33:57 crc kubenswrapper[4825]: I0219 00:33:57.319808 4825 generic.go:334] "Generic (PLEG): container finished" podID="844e72c9-d703-4aad-9d54-68eb45322983" containerID="64ec039ad8364ebd9ef1ef96ca273229ed190de5f79ef2f977199000e3637df8" exitCode=0 Feb 19 00:33:57 crc kubenswrapper[4825]: I0219 00:33:57.319848 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qppt" event={"ID":"844e72c9-d703-4aad-9d54-68eb45322983","Type":"ContainerDied","Data":"64ec039ad8364ebd9ef1ef96ca273229ed190de5f79ef2f977199000e3637df8"} Feb 19 00:33:57 crc kubenswrapper[4825]: I0219 00:33:57.320082 4825 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qppt" event={"ID":"844e72c9-d703-4aad-9d54-68eb45322983","Type":"ContainerStarted","Data":"5c91b60a4a960a15e3f6681cd3b18b2068f927b5e16608f96ce733692ade6437"}